ISO/IEC 42001:2023 Controls

This document contains both custom and reference controls designed to help users achieve compliance with the ISO/IEC 42001:2023 standard. Below, you will find detailed explanations and guidelines on how to fulfill these controls, complete with examples and definitions. This product is specifically designed to assist organizations in meeting their compliance objectives and managing the risks associated with the design and operation of AI systems.

Validaitor Custom Controls

The controls listed below have been developed by Validaitor to assist organizations in achieving compliance with the ISO/IEC 42001:2023 standard. While the standard provides reference controls, we believe these custom controls are essential for ensuring comprehensive compliance. They are designed to help organizations meet their objectives and manage the risks associated with the design and operation of AI systems.

Context of the Organization

V.1.1 Understanding the Organization and its Context

Determine Internal Issues

Assess and document internal factors that may influence the organization’s AI strategies, performance, and compliance. This analysis helps identify elements that could impact the effectiveness of the AI management system and supports informed governance and strategic planning.

How to Fulfill in Validaitor Platform

  • List Response: Users should provide detailed descriptions for each internal issue to clarify its relevance to their AI system. This description should explain why the issue is significant to the organization and how it impacts the intended purpose and goals of the AI system.

  • Users can select default internal issues provided on the platform but are encouraged to customize the descriptions to align with their organization’s specific context.

  • Each description should answer:
    • Why is this internal issue relevant to our AI system?
    • How does this issue affect the intended purpose or goals of our AI system?

Example Steps to Identify Some Internal Issues

Assess Organizational Policies and Objectives

  • What to Do: Review your organization’s AI-related objectives, policies, and governance practices.
  • How to Decide on Relevance: If internal policies or governance structures affect your ability to achieve AI-related goals, list this as a relevant issue.
    • Example: Our AI governance policy mandates risk assessments, which impact how we plan and test AI systems before deployment.

Evaluate Resource Availability

  • What to Do: Look at the organization’s available resources, including technical skills, budget, and infrastructure.
  • How to Decide on Relevance: If resource limitations influence AI development, usage, or maintenance, this should be documented as an issue.
    • Example: We lack sufficient computing resources for large-scale AI model training, limiting our capacity for high-complexity projects.

Identify Contractual Obligations

  • What to Do: Examine any contracts or partnerships that impose specific requirements or restrictions on AI development or deployment.
  • How to Decide on Relevance: If these obligations shape your AI management system or limit flexibility, list this as a relevant issue.
    • Example: Our contract with a client requires us to meet strict ethical guidelines for AI, impacting our AI model selection and data usage policies.

Clarify the Intended Purpose of AI Systems

  • What to Do: Define each AI system’s purpose, objectives, and alignment with the organization’s mission.
  • How to Decide on Relevance: If an AI system’s purpose influences decision-making, resource allocation, or priorities, document this as an issue.
    • Example: Our AI system is designed to enhance customer service automation, and we must balance user experience improvements with maintaining personal interaction quality.

Determine External Issues

Assess and document external factors that may influence the organization’s AI strategies, performance, and compliance. This analysis helps identify elements that could impact the effectiveness of the AI management system and supports informed governance and strategic planning.

How to Fulfill in Validaitor Platform

  • List Response: Users should provide detailed descriptions for each external issue to clarify its relevance to their AI system. This description should explain why the issue is significant to their organization and how it impacts the intended purpose and goals of the AI system.

  • Users can select default external issues from the platform, but they are encouraged to modify the description to suit their organization’s specific context.

  • Each description should answer:
    • Why is this external issue relevant to our AI system?
    • How does this issue affect the intended purpose of our AI system?

Example Steps to Identify Some External Issues

Review Regulatory and Legal Requirements

  • What to Do: Examine regulations, guidelines, and restrictions specific to your industry or region. Identify any rules on AI development, usage, or data protection that apply to your organization.
  • How to Decide on Relevance: If this regulation impacts how you design, develop, or deploy your AI systems, document it as a relevant issue.
    • Example: Our AI systems handle personal data, so GDPR compliance is essential to avoid legal consequences and protect user privacy.

Analyze Market and Competitive Landscape

  • What to Do: Look at current AI trends, competitor strategies, and customer demands within your industry.
  • How to Decide on Relevance: If market pressures or customer expectations shape how you need to develop or deliver AI solutions, list this as an issue.
    • Example: To remain competitive, we need to adopt transparent AI models, as customers and regulators demand explainability in AI decision-making.

Consider Cultural and Ethical Factors

  • What to Do: Identify cultural, ethical, or social attitudes that may affect the adoption or perception of your AI solutions.
  • How to Decide on Relevance: If societal values influence how you operate or deploy AI, document this as an issue.
    • Example: Public concern over AI fairness requires us to integrate ethical AI practices to build trust and avoid bias in our AI systems.

Evaluate Environmental Impacts

  • What to Do: Assess how climate change and environmental considerations influence your AI systems' operations or goals.
  • How to Decide on Relevance: If environmental impact is a significant factor in your organization’s strategy, list it as an issue.
    • Example: Our high-power AI models contribute to energy consumption, so we aim to reduce our carbon footprint by optimizing our computational resources.

Determine Climate Change Relevance

Evaluate and document whether climate change is a relevant consideration for your organization's AI systems. This involves assessing how climate change may impact or be impacted by the development, deployment, and operations of AI systems, including energy consumption, emissions, and alignment with sustainability initiatives.

If climate change is found to be relevant, describe how it influences your AI management practices, such as adopting greener technologies, reducing emissions, or aligning with sustainability goals.

How to Fulfill in Validaitor Platform:

  • Free Text Response: For each AI system, if climate change is relevant, provide a detailed description of how it affects your AI management practices in the provided text box. Refer to the examples provided above for guidance.

Example Steps to Identify Climate Change Relevance

Assess Environmental Impact of AI Systems:

  • What to Do: Evaluate the environmental footprint of AI systems, considering factors like energy consumption, emissions, and resource use.
  • How to Decide on Relevance: If your AI systems require significant computing resources, which contribute to emissions or energy usage, climate change is likely a relevant issue. Document this finding to address potential environmental impacts.
    • Example: Our AI systems consume high levels of energy, particularly during model training, making it essential to monitor and reduce their carbon footprint.

Review Climate-Related Regulations and Sustainability Policies:

  • What to Do: Check for climate-related regulations or sustainability policies that apply to your organization or industry. These might include carbon reduction targets, renewable energy commitments, or environmental impact assessments.
  • How to Decide on Relevance: If these regulations or policies affect your AI system operations, list climate change as a relevant issue and document associated responsibilities or goals.
    • Example: Our corporate sustainability policy requires us to limit carbon emissions from energy-intensive operations, affecting the design and deployment of AI models.

Evaluate Purpose of AI Systems

Clearly defining and documenting the purpose and objectives of each AI system helps ensure that each system aligns with the organization’s goals, ethical standards, and operational requirements. This process involves reviewing the registry of AI systems and specifying the intended purpose for each. Having a well-documented purpose for each AI system provides clarity on its role within the organization, helps maintain focus on intended outcomes, and ensures alignment with regulatory and ethical standards.

Steps to Identify and Document the Purpose of AI Systems

  • What to Do: Review each AI system’s role within the organization and define its primary objective for deployment.
  • How to Decide on Relevance: If an AI system serves a key organizational need, process, or goal, record its intended purpose in the Validaitor platform to ensure alignment with your organizational strategy.
  • Examples
    • To improve customer experience by providing personalized recommendations and reducing decision-making time for users.
    • To streamline internal workflows and reduce administrative time by automating repetitive tasks in customer support.
    • To assist medical staff in diagnosing conditions by providing preliminary analysis of medical imaging data, aiming to increase diagnosis speed and accuracy.

How to Fulfill in Validaitor Platform:

  • Free Text Response: For each AI system, clearly define its purpose and enter it in the provided text box.

Determine the Organization’s Role with Respect to the AI System

Identifying and documenting the organization’s specific roles and responsibilities with respect to each AI system clarifies accountability and supports effective AI governance. Roles may include developer, operator, user, or relevant authority, depending on the level of involvement in creating, managing, or using the AI system.

This documentation helps ensure that each role aligns with compliance, ethical standards, and operational goals. A dropdown for each AI system allows users to accurately specify their responsibilities for each system.

Steps to Identify Organization’s Role in Relation to Each AI System

  • What to Do: Determine the main role your organization plays in relation to each AI system, selecting from the roles defined below (e.g., provider, producer, deployer, customer, or relevant authority).
  • How to Fulfill in Validaitor Platform:
    • Assign Roles:
      • AI Provider: Responsible for supplying AI solutions or services to end-users or organizations.
      • AI Producer: Creates and develops AI systems, including designing algorithms, training models, and updating functionalities.
      • AI Importer: Brings AI systems or components into a region or organization from an external source or country, ensuring compliance with local regulations.
      • AI Deployer: Implements and sets up AI systems within an organization, ensuring they are correctly configured and operational.
      • AI Distributor: Distributes AI systems to customers or clients, managing delivery and ensuring accessibility of the AI solutions.
      • AI Customer: Utilizes the AI system as an end-user, leveraging it for business operations or specific tasks.
      • AI Partner: Collaborates on the development, deployment, or use of AI systems with shared responsibilities or objectives.
      • AI Subject: Refers to individuals or entities directly impacted by the AI system’s outputs, such as recipients of AI-generated decisions.
      • Relevant Authority: Monitors and regulates AI systems, ensuring they adhere to compliance, safety, and ethical standards.

V.1.2 Understanding the Needs and Expectations of Interested Parties

Determine Interested Parties

Organizations must identify and understand the needs and expectations of parties that have a stake in their AI management system. These interested parties might have specific requirements, such as compliance expectations, ethical standards, or climate-related concerns. The organization must determine which requirements are relevant and, importantly, decide which of these requirements will be addressed by the AI management system to ensure alignment with stakeholders’ interests.

Steps to Determine Interested Parties, Requirements, and Relevant AI Management Actions

  • What to Do: List parties that are impacted by or have an influence on the organization’s AI management system. These could include customers, regulatory authorities, suppliers, employees, or environmental advocacy groups.

    Document specific requirements each interested party has. These might include data privacy, transparency, ethical standards, or, where relevant, climate change considerations.

    Decide which of the identified requirements will be actively incorporated into the AI management system.

  • How to Decide on Relevance: Consider if a party has requirements or expectations that directly affect AI operations, ethical guidelines, or compliance needs. If their needs impact AI decisions or goals, they are an interested party.

    • Example:
      • Interested Party Name: Regulatory Authority
      • Description: A government body enforcing AI regulations that impact our compliance requirements.
      • Requirements: Compliance with data privacy laws (GDPR) and transparency in AI decision-making processes.
      • Requirements Addressed: If a requirement is covered in the AI Management System, please click the checkbox next to the requirement name.

How to Fulfill in Validaitor Platform:

  • List Response: Users should identify and document their interested parties along with the relevant information mentioned above. The Validaitor platform provides a set of predefined interested parties and their associated requirements, commonly used by organizations. Users can click the "Add Default Interested Parties" button to include these predefined entries and then customize them to suit their specific needs.

V.1.3 Determining the Scope of the AI Management System

Determine Scope of AI Management System

Organizations must define the scope of their AI management system, which involves setting boundaries and clarifying applicability. The scope establishes which parts of the organization and AI activities are covered under the AI management system, specifying how it meets the organization’s goals, external and internal issues (as identified in control V.1.1), and the requirements of interested parties (as identified in control V.1.2). Documenting this scope ensures a clear alignment with organizational objectives, compliance requirements, and stakeholder expectations.

Steps to Define and Document the Scope of the AI Management System

  • What to Do: Establish the boundaries and applicability of the AI management system by outlining relevant organizational areas, AI activities, external and internal issues, and stakeholder requirements. Specify the AI management system’s coverage in terms of functions, departments, projects, and any associated compliance or performance needs.

How to Fulfill in the Validaitor Platform:

  • Upload or Create Document: You can either upload a completed document outlining these scope details or use our platform’s template document, which provides structured sections to address each of these requirements directly.

Planning

V.2.1 Actions to Address Risks and Opportunities

Risk Assessment

To effectively manage the risks associated with AI systems, the organization must establish a process for identifying, analyzing, and evaluating potential risks. This includes assessing the likelihood of each risk, the potential impacts, and determining appropriate treatment strategies to reduce or mitigate adverse effects. A structured risk assessment approach ensures AI systems operate safely, ethically, and in compliance with organizational and regulatory requirements.

Steps to Identify, Analyze, and Evaluate Risks for AI Systems

  • What to Do: Create a comprehensive risk registry for each AI system that documents identified risks, their potential impacts, likelihoods, and chosen treatment strategies. Each entry should capture critical details of risk scenarios, possible consequences, and mitigation plans.

    Assess the impact level and likelihood for each risk scenario, noting how severe and probable each risk is. Risks with high impact and high likelihood scores should be prioritized and addressed with stronger mitigation strategies in the treatment plan.

    For each identified risk, select a treatment option that best mitigates or manages the risk. High-impact risks typically require mitigation or avoidance, whereas lower-impact risks may be acceptable with minimal intervention.

    To create a risk registry entry, complete the following fields (a minimal sketch of such an entry follows the list):

    • AI System: Select or enter the specific AI system to which the risk applies.
    • Risk Scenario: Choose a predefined scenario from the dropdown or create a custom scenario that describes the risk.
    • Impact Type: Select the type of impact from predefined options (e.g., reputational, financial, other).
    • Impact Level: Rate the severity of the impact on a scale of 1-5, where 1 is very low and 5 is very high.
    • Likelihood to Occur: Estimate the likelihood of the risk occurring on a scale of 1-5, where 1 is very unlikely and 5 is very likely.
    • Treatment: Choose a treatment strategy from the dropdown options (e.g., avoid, transfer, mitigate, accept).
    • Treatment Explanation: Provide a brief explanation of the treatment strategy chosen and how it will mitigate or manage the identified risk.
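
The field structure above maps naturally to a structured record. The sketch below is a minimal Python illustration (the class, field names, and priority formula are our own, not a Validaitor platform feature) of how the 1-5 impact and likelihood ratings can combine into a score for prioritizing treatment:

```python
from dataclasses import dataclass
from enum import Enum


class Treatment(Enum):
    AVOID = "avoid"
    TRANSFER = "transfer"
    MITIGATE = "mitigate"
    ACCEPT = "accept"


@dataclass
class RiskEntry:
    """One risk registry entry; fields mirror the list above."""
    ai_system: str
    risk_scenario: str
    impact_type: str       # e.g., "reputational", "financial", "other"
    impact_level: int      # 1 (very low) to 5 (very high)
    likelihood: int        # 1 (very unlikely) to 5 (very likely)
    treatment: Treatment
    treatment_explanation: str

    def __post_init__(self) -> None:
        # Enforce the 1-5 rating scales described above.
        for name, value in (("impact_level", self.impact_level),
                            ("likelihood", self.likelihood)):
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be 1-5, got {value}")

    @property
    def priority(self) -> int:
        """Impact x likelihood (1-25); higher scores are treated first."""
        return self.impact_level * self.likelihood


entry = RiskEntry(
    ai_system="Customer support chatbot",
    risk_scenario="Model produces discriminatory responses",
    impact_type="reputational",
    impact_level=4,
    likelihood=3,
    treatment=Treatment.MITIGATE,
    treatment_explanation="Add bias testing and human review of flagged replies.",
)
print(entry.priority)  # 12 -> address with a stronger mitigation strategy
```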

How to Fulfill in Validaitor Platform:

  • Create Risk Registry: In the Validaitor platform, set up a risk registry by completing the fields explained above for each identified risk.

Reference Controls

The controls listed below serve as a reference to help the organization meet its objectives and manage risks associated with the design and operation of AI systems. These controls are flexible; the organization should only apply the controls that best fit its specific objectives and risk management needs.

A.2.2 AI Policy

Create AI Policy

The organization should establish a documented AI policy that guides the development, deployment, and use of AI systems. This policy aligns with the organization’s values, business strategy, and regulatory requirements, providing a foundation for responsible AI practices.

Steps to Develop a Comprehensive AI Policy

To create a comprehensive AI policy, start by establishing core principles and scope, informed by the organization’s business strategy, values, risk tolerance, and regulatory environment. The policy should outline guiding principles for all AI-related activities, address key organizational goals and risk management needs, and cover specific areas as necessary to provide additional guidance or cross-references to other policies.

What to Include:

  • Alignment with business strategy and organizational values
  • Consideration of risk levels posed by AI systems
  • Compliance with legal and contractual requirements
  • Risk environment and potential impact on relevant interested parties
  • Core principles guiding AI activities
  • A structured approach to setting and reviewing AI objectives
  • Assurance of compliance with legal and regulatory standards
  • A focus on continuous improvement for AI management practices
  • Topics for cross-references to other policies:
    • AI resources and asset management
    • AI system impact assessments
    • AI system development and procurement

How to Fulfill in Validaitor Platform:

  • Upload or Create AI Policy: Users can either upload a completed AI policy document that satisfies these requirements or use the platform’s AI policy template, which pre-structures the policy to align with the outlined requirements. If using the template, users only need to fill in specific details as prompted to complete the policy.

A.2.3 Alignment with other Organizational Policies

Many domains intersect with AI, including quality, security, safety, and privacy. To ensure a cohesive approach, the organization should analyze existing policies to identify areas where they intersect with AI objectives. Based on this analysis, current policies can be updated if needed, or provisions can be included directly in the AI policy.

Upload Intersecting Policies

Upload policies that intersect with the organization’s AI policy, such as those addressing quality, security, safety, and privacy. Examples of policies to consider include:

  • GDPR Compliance Policy: Ensures AI systems align with data privacy regulations to protect user data and maintain compliance with data protection laws.
  • Information Security Policy: Outlines the organization’s approach to securing data and systems, especially relevant for AI systems handling sensitive or confidential information.
  • Quality Assurance Policy: Ensures that AI systems meet established quality standards and undergo rigorous testing to prevent defects or failures.
  • Health and Safety Policy: Relevant if AI systems are used in environments where they could impact human health or safety, ensuring safe deployment and use.
  • Ethics and Compliance Policy: Provides guidance on maintaining ethical standards in AI development, addressing fairness, transparency, and avoidance of bias.

How to Fulfill in Validaitor Platform

  • Upload Document: Users can upload documents for each policy that intersects with the AI policy, ensuring all relevant domains are represented and aligned.

Align AI Policy

Ensure alignment of the AI policy with the organization's strategic direction and objectives by reviewing and incorporating relevant aspects from intersecting policies. An alignment document should include the following sections:

  • Identify Intersecting Policies

    List the specific organizational policies that intersect with AI-related objectives, including quality, security, privacy, safety, ethics, and any additional policies that influence or are impacted by AI operations.

    Examples:

    • Data Privacy and Protection Policy
    • IT Security and Cybersecurity Policies
    • Sustainability and Environmental Policy
  • Assess Policy Updates or Integrations

    Determine if updates are needed in existing policies to address AI considerations or if provisions should be added to the AI policy itself to cover these intersections. Specify which policies require updates and describe any necessary modifications or references to align with AI objectives.

    Examples:

    • GDPR Compliance: Add specific AI-related data handling requirements.
    • Information Security: Include safeguards for AI models and data.
    • Quality Assurance: Integrate AI-specific testing protocols.

How to Fulfill in Validaitor Platform

  • Upload or Create Document: Users can either upload a completed alignment document or use the platform’s AI policy alignment template, which provides a structured format for incorporating necessary intersections and ensuring the AI policy aligns with organizational objectives and strategic goals.

Internal Organization

The objective of this section is to establish accountability within the organization to uphold its responsible approach to the implementation, operation, and management of AI systems.

A.3.2 AI Roles and Responsibilities

Roles and responsibilities for AI should be defined and allocated according to the organization’s needs to ensure accountability throughout the AI system’s life cycle. Clearly assigning responsibilities across relevant areas is essential for effective governance, risk management, and compliance.

Assign Roles and Responsibilities

Define and document responsibilities and authorities for key roles within the AI management system. When assigning these roles, consider AI policies, objectives, and identified risks to ensure comprehensive coverage of all critical areas.

Key Roles to Assign:

  • Risk Management: Oversee AI-related risks, develop mitigation strategies, and ensure alignment with organizational risk policies.
  • AI System Impact Assessments: Conduct and review impact assessments to evaluate potential effects of AI systems on various stakeholders and environments.
  • Asset and Resource Management: Manage assets and resources required for the AI system, ensuring efficiency and availability.
  • Security: Oversee security protocols to protect AI systems and data from unauthorized access or breaches.
  • Safety: Ensure AI systems meet safety standards, particularly if they impact human health or operational safety.
  • Privacy: Implement and monitor privacy measures to protect sensitive data within the AI system.
  • Development: Lead or supervise AI system development, ensuring alignment with organizational standards and policies.
  • Performance: Monitor and evaluate AI system performance to maintain effectiveness and achieve organizational objectives.
  • Human Oversight: Provide human oversight to ensure AI systems operate within ethical and operational guidelines.
  • Supplier Relationships: Manage relationships with AI suppliers and ensure they meet quality and compliance standards.
  • Data Quality Management: Ensure data quality throughout the AI system's life cycle, from initial training to ongoing operations.

How to Fulfill in Validaitor Platform

  • Assign Roles: Users can assign these roles within the Validaitor platform by selecting an individual from the organization using a dropdown menu for each role and confirming the assignment by clicking the "Save" button at the bottom right.

A.3.3 Reporting of Concerns

The organization should define and implement a process for reporting concerns related to its role in AI system activities throughout the AI system’s life cycle. A well-structured reporting mechanism is essential for accountability and allows for timely resolution of issues.

Establish Concern Reporting Process

Develop and document a reporting process to address concerns regarding the organization’s involvement with AI systems. This process should ensure transparency, protect confidentiality, and provide clear pathways for escalation when necessary.

Steps to Establish a Concern Reporting Process

  • Define a Reporting Process

    • Outline a clear process for individuals to report concerns regarding AI systems, covering how, where, and to whom reports should be submitted. Ensure the process is accessible and actively promoted among employees and contracted individuals (e.g., employees can report concerns via a dedicated email address or an online portal accessible through the company intranet).
  • Ensure the reporting mechanism includes the following features:

    • Confidentiality and Anonymity: Provide options for individuals to report anonymously or confidentially to encourage openness without fear of reprisals.
    • Availability and Promotion: Ensure the reporting mechanism is widely available to all staff, including contracted personnel, and actively promoted within the organization.
    • Qualified Staffing: Staff the reporting channel with qualified personnel capable of handling reports effectively.
    • Investigation and Resolution Powers: Define investigation and resolution powers for those managing the reports to ensure issues are addressed adequately.
    • Escalation Mechanism: Include mechanisms for timely escalation of concerns to appropriate management levels.
    • Protection from Reprisals: Offer protections for individuals reporting concerns and those involved in investigations to prevent retaliation.
    • Timely Response: Define response times for addressing concerns to ensure issues are handled within an appropriate timeframe.
    • Confidential Reporting and Business Confidentiality: Maintain confidentiality and anonymity where needed, respecting business confidentiality considerations.

How to Fulfill in Validaitor Platform:

  • Upload or Create Document: Users can either upload a completed document detailing the concern reporting process or use the platform’s reporting process template, which covers both the reporting process definition and the reporting mechanism features. With the template, users only need to fill in specific sections to tailor the process to their organization.

Resources for AI Systems

To ensure that the organization accounts for the resources (including AI system components and assets) of the AI system in order to fully understand and address risks and impacts.

A.4.2 Resource Documentation

The organization should identify and document the resources required for various stages of the AI system life cycle and other AI-related activities. Documenting these resources is essential to assess risks, understand potential impacts, and support informed decision-making throughout the AI system’s life cycle.

It is recommended to complete the following requirements (A.4.3, A.4.4, A.4.5, and A.4.6) before addressing this one.

Determine Resource Needs

Identify the resources necessary for the AI management system, considering all stages of the AI system’s life cycle and any additional AI-related activities within the organization. Resource identification should address both internal and external sources as applicable.

How to Fulfill in Validaitor Platform

  • Automatically Fulfilled: Users are not required to take any additional actions for this requirement. Completing controls A.4.3, A.4.4, A.4.5, and A.4.6 will inherently fulfill it.

Document Resource Needs

Document the identified resources below in detail, ensuring clarity on how each resource supports the AI management system and life cycle activities. This documentation may use data flow diagrams, system architecture diagrams, or other relevant formats to illustrate resource allocation and flow.

Resources to Document:

  • AI System Components: Core components necessary to build, operate, and maintain AI systems.

    • Example: The AI system includes a recommendation engine, data ingestion pipeline, and API services for real-time data processing.
  • Data Resources: Data used at any stage of the AI system life cycle, including training, validation, and deployment data.

    • Example: Training data sourced from customer interaction logs, validation data obtained from sample test sets, and production data processed in real-time from live user interactions.
  • Tooling Resources: AI algorithms, models, and tools necessary for AI development, operation, and maintenance.

    • Example: The system relies on TensorFlow for model development, Keras for building neural networks, and scikit-learn for data preprocessing and evaluation.
  • System and Computing Resources: Hardware and storage needed to run AI models, including computing power and storage capacity for data and tooling.

    • Example: NVIDIA GPUs for model training, AWS EC2 instances for scalable deployment, and Amazon S3 for data storage.
  • Human Resources: Personnel with required expertise for AI development, sales, training, operation, and maintenance, according to the organization’s role across the AI system life cycle.

    • Example: A team of data scientists and ML engineers for development, along with a deployment team experienced in DevOps practices and system monitoring.

How to Fulfill in Validaitor Platform:

  • Upload or Create Document: Users can either upload a completed document that contains all relevant resource details or use the Validaitor platform’s template, which pre-structures these sections. If using the template, users will need to fill in organization-specific details for each resource type (e.g., AI system components, data resources, tooling, system, and human resources) to support a thorough and comprehensive AI impact assessment.

A.4.3 Data Resources

As part of resource identification, the organization should document detailed information about the data resources utilized for each AI system. Comprehensive documentation of data resources is crucial to support transparency, quality control, and effective AI system management.

Determine AI System Data Resources

Identify and document the data resources required for each AI system, addressing critical aspects to ensure data quality, provenance, and intended use.

Information to Document for Data Resources with Examples

  • Data Provenance: The origin of the data, including the source and acquisition method.

    • Example: Data sourced from publicly available government datasets accessed via an open data portal, as well as proprietary data acquired through a licensed third-party provider on 2023-05-15.
  • Last Updated Date: The date the data were last updated or modified, typically found in metadata.

    • Example: The data was last updated on 2024-02-01, based on metadata tags, to ensure it reflects the most current trends and patterns.
  • Data Categories: Relevant data categories, such as training, validation, testing, or production data (especially for machine learning systems).

    • Example: The dataset includes:
      • Training Data: Used for developing and training machine learning models.
      • Validation Data: Applied to tune model parameters and prevent overfitting.
      • Testing Data: Used for final evaluation of model performance prior to deployment.
      • Production Data: Real-time data utilized by the model post-deployment.
  • ISO Categories: Data classification according to standards (e.g., ISO/IEC 19944-1).

    • Example: Classified under "Personal Data" and "Derived Data" as per ISO/IEC 19944-1, with additional tags indicating sensitive and non-sensitive data types.
  • Data Labelling Process: Procedures used for data labeling, including criteria and quality checks.

    • Example: Data was labeled through a combination of automated and manual processes, following set criteria to ensure consistency. Each label was reviewed for accuracy by a team of data annotators to maintain uniformity across the dataset.
  • Intended Use: The specific purpose for which the data is intended within the AI system.

    • Example: The data is intended to drive a predictive model for user behavior analysis, supporting personalized product recommendations within an e-commerce platform.
  • Data Quality: Measures or standards that indicate data quality (e.g., ISO/IEC 5259 series).

    • Example: Data quality was assessed using completeness, accuracy, and consistency checks based on ISO/IEC 5259 guidelines. Regular audits are conducted quarterly to ensure data integrity remains high.
  • Retention and Disposal Policies: Policies governing how long data is retained and the procedures for its disposal.

    • Example: Data is retained for up to two years after collection, after which it is securely deleted per the organization’s data disposal procedures. Any exceptions to this policy require documented approval from the data governance team.
  • Bias Considerations: Any known or potential biases in the data that could impact AI performance or fairness.

    • Example: The dataset exhibits a demographic bias towards younger urban populations. To mitigate this, additional data from rural demographics has been incorporated to achieve a balanced representation.
  • Data Preparation: Any preprocessing steps, transformations, or cleaning methods applied to the data.

    • Example: Preprocessing steps included removing duplicates, standardizing values, filling in missing data, and normalizing fields to ensure uniform inputs. Categorical data was encoded as part of preparing the data for machine learning models.
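
Each documented preparation step above corresponds to a reproducible code step. A minimal pandas/scikit-learn sketch follows; the column names and sample data are hypothetical, for illustration only, not a platform requirement:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Hypothetical raw records; column names are illustrative only.
df = pd.DataFrame({
    "age": [34, 34, None, 51],
    "region": ["urban", "urban", "rural", "rural"],
    "spend": [120.0, 120.0, 80.0, 200.0],
})

df = df.drop_duplicates()                         # remove duplicates
df["age"] = df["age"].fillna(df["age"].median())  # fill in missing data

scaler = MinMaxScaler()                           # normalize numeric fields
df[["age", "spend"]] = scaler.fit_transform(df[["age", "spend"]])

encoder = OneHotEncoder(sparse_output=False)      # scikit-learn >= 1.2
encoded = encoder.fit_transform(df[["region"]])   # encode categorical data
df[list(encoder.get_feature_names_out(["region"]))] = encoded
df = df.drop(columns=["region"])

print(df)
```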

How to Fulfill in Validaitor Platform

  • Free Text Response: Users can enter detailed information about the data resources in a text box for each AI system. Include relevant details from the topics above to ensure comprehensive documentation of data resources as part of the AI management system.

A.4.4 Tooling Resources

As part of resource identification, the organization should document detailed information about the tooling resources utilized for each AI system. Proper documentation of tooling resources supports effective system management, transparency, and operational efficiency.

Determine AI System Tooling Resources

Identify and document the tooling resources required for each AI system, including software, hardware, and processes essential for system development, optimization, and deployment.

Tooling Resources to Document with Examples:

  • Algorithm Types and Machine Learning Models: The types of algorithms and models used within the AI system.

    • Example: The AI system uses a convolutional neural network (CNN) for image classification, combined with a decision tree model for data filtering.
  • Data Conditioning Tools or Processes: Tools or processes used to prepare and preprocess data before it is fed into the model.

    • Example: Data conditioning includes the use of Pandas for data cleaning, NumPy for data formatting, and scikit-learn preprocessing methods to normalize and standardize data inputs.
  • Optimization Methods: Methods or tools used to optimize model performance, training speed, or resource utilization.

    • Example: The AI system leverages Stochastic Gradient Descent (SGD) for model optimization and Hyperopt for hyperparameter tuning to improve model accuracy and efficiency.
  • Evaluation Methods: Techniques or tools used to evaluate the performance and accuracy of the AI system.

    • Example: Evaluation methods include cross-validation with scikit-learn for performance testing, as well as ROC-AUC scores for classification models to assess accuracy and predictive power (see the sketch after this list).
  • Provisioning Tools for Resources: Tools or platforms used to allocate and manage resources needed for AI system operation.

    • Example: Kubernetes is used for resource provisioning to manage computing resources, while AWS EC2 instances handle scalability requirements for training and deployment.
  • Tools to Aid Model Development: Tools that support the design, testing, and improvement of AI models.

    • Example: TensorFlow and PyTorch are utilized for model development, with Jupyter Notebooks used to experiment and document model iterations.
  • Software and Hardware for AI System Design, Development, and Deployment: Specific software and hardware used throughout the AI system’s life cycle.

    • Example: NVIDIA GPUs are used for model training due to high computational needs, while Docker containers manage deployment. The system also relies on Ubuntu OS for development and Git for version control.
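
As a concrete illustration of the evaluation methods named above, the sketch below runs five-fold cross-validation with an ROC-AUC scorer in scikit-learn; the synthetic dataset and model choice are assumptions for demonstration only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data standing in for real system data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

model = RandomForestClassifier(random_state=42)

# Five-fold cross-validation scored with ROC-AUC.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("ROC-AUC per fold:", scores.round(3))
print(f"Mean ROC-AUC: {scores.mean():.3f}")
```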

How to Fulfill in Validaitor Platform

  • Free Text Response: Users can document tooling resources for each AI system in a text box, detailing each category above where applicable. This documentation helps provide a complete view of the resources supporting AI system performance and maintenance.

A.4.5 System and Computing Resources

As part of resource identification, the organization should document detailed information about the system and computing resources utilized for each AI system. This documentation helps ensure that the AI system meets operational needs and is managed efficiently across development, deployment, and maintenance stages.

Determine AI System Computing Resources

Identify and document the computing resources required for each AI system, including details on hardware, processing, location, and environmental impact.

System and Computing Resources to Document with Examples:

  • Resource Requirements of the AI System: Specifications needed to ensure the AI system can function properly, particularly on devices with constrained resources.

    • Example: The AI system requires a minimum of 8 GB of RAM, a quad-core processor, and 100 GB of storage to operate optimally on both local devices and cloud servers (a minimal automated check of such minimums is sketched after this list).
  • Location of System and Computing Resources: The physical or virtual location where the AI system and its resources are hosted, such as on-premises, in the cloud, or at the edge.

    • Example: The system is hosted on AWS Cloud for scalable computing and storage, with edge computing capabilities enabled for real-time data processing at remote locations.
  • Processing Resources: The specific network, CPU, GPU, and storage resources needed to run the AI system.

    • Example: The AI system utilizes NVIDIA A100 GPUs for model training, Intel Xeon CPUs for inferencing, and S3 storage for large dataset storage, with a 1 Gbps network bandwidth for data transfer.
  • Impact of Hardware Used: The environmental impact and cost considerations associated with the hardware used to run the AI system.

    • Example: The organization uses energy-efficient NVIDIA GPUs to reduce power consumption during training. Hardware is selected from sustainable sources where possible, and cloud resources are monitored to optimize cost and minimize environmental impact.
  • Continual Improvement Requirements: Additional resources necessary to support ongoing improvement, development, and updates of the AI system.

    • Example: GPU upgrades and additional storage capacity are projected for future development phases to support continual model refinement and data expansion as the system evolves.
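
Minimum specifications like those in the resource-requirements example above can also be verified automatically. The sketch below checks hypothetical thresholds (8 GB of RAM, four CPU cores, 100 GB of free storage) using the third-party psutil library plus the standard library; the thresholds and the check itself are illustrative, not a platform feature:

```python
import os
import shutil

import psutil  # third-party: pip install psutil

# Hypothetical minimums taken from the example above.
MIN_RAM_GB = 8
MIN_CPU_CORES = 4
MIN_STORAGE_GB = 100

ram_gb = psutil.virtual_memory().total / 1024**3
cores = os.cpu_count() or 0
free_gb = shutil.disk_usage("/").free / 1024**3

checks = {
    f"RAM >= {MIN_RAM_GB} GB": ram_gb >= MIN_RAM_GB,
    f"CPU cores >= {MIN_CPU_CORES}": cores >= MIN_CPU_CORES,
    f"Free storage >= {MIN_STORAGE_GB} GB": free_gb >= MIN_STORAGE_GB,
}
for requirement, met in checks.items():
    print(f"{requirement}: {'OK' if met else 'NOT MET'}")
```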

How to Fulfill in Validaitor Platform:

  • Free Text Response: Users can enter detailed information about system and computing resources in a text box for each AI system, covering the categories listed above. This documentation provides a comprehensive overview of the resources required to support the AI system effectively throughout its life cycle.

A.4.6 Human Resources

As part of resource identification, the organization should document detailed information about the human resources and their competencies required throughout the AI system life cycle. This documentation ensures that the organization has the necessary expertise for each stage, from development to decommissioning, and supports effective oversight and integration.

Determine AI System Human Resources

Identify and document the human resources required for each AI system, including roles, expertise, and competencies necessary for development, deployment, operation, maintenance, and more.

Human Resources to Document with Examples:

  • Data Scientists: Experts who handle data processing, cleaning, and model training to support AI system development.

    • Example: A team of data scientists proficient in Python and machine learning libraries like TensorFlow and scikit-learn is required to process data and train models effectively.
  • Roles Related to Human Oversight of AI Systems: Personnel responsible for monitoring and ensuring the safe and ethical operation of AI systems.

    • Example: Human oversight roles include AI system auditors who monitor the model’s behavior post-deployment to detect bias or unexpected outcomes, ensuring responsible AI practices.
  • Experts on Trustworthiness Topics (Safety, Security, Privacy): Specialists who ensure that AI systems comply with safety, security, and privacy standards.

    • Example: Privacy officers and security analysts are needed to verify that the AI system adheres to GDPR requirements and cybersecurity protocols, safeguarding data integrity and user privacy.
  • AI Researchers and Specialists: Professionals with deep knowledge in AI and machine learning who drive innovation and improve model accuracy and performance.

    • Example: AI researchers with expertise in reinforcement learning and natural language processing are involved in refining model algorithms and implementing advanced AI techniques.
  • Domain Experts Relevant to the AI Systems: Specialists in the specific industry or field where the AI system is deployed, ensuring relevance and accuracy.

    • Example: Medical professionals collaborate with the AI team to guide the development of an AI diagnostic tool, ensuring it aligns with clinical standards and practices.
  • Competencies Across Life Cycle Stages: Identify specific competencies needed for different phases, such as deployment, change management, and decommissioning.

    • Example: Engineers with experience in DevOps oversee AI deployment, while IT specialists manage system decommissioning to ensure proper disposal of sensitive data and adherence to security standards.

How to Fulfill in Validaitor Platform

  • Free Text Response: Users can enter detailed information about the required human resources in a text box for each AI system, specifying roles, expertise, and competencies based on the categories above. This documentation provides a comprehensive view of the human resources supporting the AI system throughout its life cycle.

Assessing Impacts of AI Systems

To assess AI system impacts on individuals or groups of individuals, or both, and on societies affected by the AI system throughout its life cycle.

A.5.2 AI Impact Assessment

The organization should establish a process to assess the potential impacts of AI systems on individuals, groups, and society across the AI system’s life cycle. This process helps identify, evaluate, and mitigate potential negative consequences, supporting responsible AI development and deployment.

Conduct AI Impact Assessment

Establish a documented process for assessing the impact of AI systems on individuals, groups, and society, covering critical elements and ensuring thorough evaluation and management of potential risks.

Key Considerations for AI Impact Assessment:

  • Potential Impact Areas: Assess whether the AI system affects:

    • Legal Position or Life Opportunities: For example, AI decisions that influence job opportunities, access to services, or legal outcomes.
    • Physical or Psychological Well-being: Systems that could affect health, safety, or emotional state.
    • Universal Human Rights: Consider if the AI system impacts rights like privacy, freedom of expression, or equality.
    • Societal Impact: Evaluate broader effects on society, such as fairness, social stability, or cultural values.
  • When to Perform an AI Impact Assessment: Conduct assessments based on factors like:

    • Intended Purpose and Context: For high-stakes applications, such as medical diagnostics or legal decision-making.
    • Complexity and Automation Level: For highly automated or complex AI technologies.
    • Data Sensitivity: If sensitive data types (e.g., health, financial) are processed.
  • Core Elements of the Assessment Process:

    • Identification: Identify sources, events, and potential outcomes related to the AI system.
    • Analysis: Analyze possible consequences and likelihood of occurrence.
    • Evaluation: Prioritize and decide on acceptance of risks.
    • Treatment: Implement mitigation measures as needed.
    • Documentation and Reporting: Document findings, communicate results, and report as needed.
  • Utilization of Assessment Results: Use findings to inform the AI system's design and usage, guide approvals and reviews, and ensure alignment with safety, privacy, and ethical standards.

Example: An AI impact assessment for a facial recognition system might include evaluating the potential for bias in recognition accuracy, assessing privacy impacts on individuals, considering societal implications for surveillance practices, and implementing safeguards to address these issues.

How to Fulfill in Validaitor Platform

  • Upload Document: Users can upload a completed AI impact assessment document covering the considerations and elements listed above. The document should detail the process, impact areas, and any mitigation steps implemented to address potential risks associated with the AI system.

A.5.3 Documentation of AI System Impact Assessments

The organization should document and retain the results of AI system impact assessments for a defined period. This documentation ensures that information about AI impacts is available for internal reference, user communication, and updates as needed.

Determine AI Impact Assessment

Identify the necessary impact assessments for AI systems regarding their effects on individuals, groups, and society.

How to Fulfill in Validaitor Platform

  • Automatically Fulfilled: This step will be automatically fulfilled if requirements for A.5.4 (Impact on Individuals and Groups) and A.5.5 (Impact on Society) are completed within the platform, so no additional action is needed.

Document AI Impact Assessment

Document the results of the impact assessments for each AI system, ensuring the information is comprehensive and includes relevant considerations.

Items to Include in Documenting Impact Assessment Results:

  • Intended Use and Foreseeable Misuse: Document the intended purpose of the AI system and any reasonably foreseeable misuse scenarios.

    • Example: A facial recognition AI system intended for secure access control but potentially misused for unauthorized surveillance.
  • Positive and Negative Impacts: Describe the system’s potential positive and negative effects on individuals, groups, and society.

    • Example: Positive impact in improving security, with a potential negative impact if biases in recognition accuracy affect certain demographic groups.
  • Predictable Failures and Mitigation Measures: Outline predictable failure scenarios, their possible impacts, and mitigation steps taken.

    • Example: A predictive analytics model that may produce inaccurate results during data outages; mitigation includes implementing alerts and data quality checks.
  • Relevant Demographic Groups: Identify demographic groups affected by the system, especially if system accuracy varies across groups.

    • Example: An AI diagnostic tool primarily used for individuals aged 50+ in rural regions; accuracy and relevance verified for this demographic.
  • System Complexity: Document the complexity level of the AI system, particularly if it involves highly automated or intricate processes.

    • Example: A multi-layered neural network with high-level automation, requiring specialized expertise to interpret results.
  • Human Oversight and Intervention: Describe the role of humans in overseeing the system, including tools and processes to avoid negative impacts.

    • Example: Human moderators who review AI-generated alerts and can override decisions as needed for quality control.
  • Employment and Staff Skilling: Include any workforce implications, such as training needs or new roles to support the AI system.

    • Example: Required training for customer service staff on handling interactions influenced by the AI recommendation system.

How to Fulfill in Validaitor Platform

  • Upload Document: Users can upload a file containing the assessment results with the details specified above. The document should clearly outline intended uses, potential impacts, predictable failures, and other considerations to support responsible management of AI system impacts.

A.5.4 Assessing AI System Impact on Individuals or Groups of Individuals

The organization should assess and document the potential impacts of AI systems on individuals or groups throughout the AI system’s life cycle. This assessment supports responsible AI practices by ensuring the AI system aligns with organizational governance, policies, and ethical considerations.

Assess AI System Impact on Individuals or Groups of Individuals

Evaluate the impact of AI systems on individuals or groups, considering both positive and negative outcomes. This includes addressing specific protection needs and expectations related to trustworthiness, fairness, and safety.

Areas of Impact to Consider:

  • Fairness: Evaluate if the AI system is fair to all individuals or groups, avoiding discrimination or bias.
  • Accountability: Ensure accountability for the system’s decisions and actions, allowing for oversight and governance.
  • Transparency and Explainability: Assess the extent to which the AI system's processes are understandable and explainable to users.
  • Security and Privacy: Address security risks and protect personal information processed by the AI system.
  • Safety and Health: Consider any potential impacts on the physical or mental health of individuals.
  • Financial Consequences: Evaluate any potential financial impacts on individuals, such as cost savings or financial risks.
  • Accessibility: Ensure the AI system is accessible to all relevant users, including individuals with disabilities.
  • Human Rights: Assess whether the system respects fundamental human rights, including privacy, freedom of expression, and equality.

How to Fulfill in Validaitor Platform

For each AI system, users should:

  1. Select the Lifecycle Stage of the AI system (e.g., Plan and Design, Deploy and Use).
  2. Choose Impact On: Select whether the impact is on "Individuals" or "Groups of Individuals."
  3. Select Impact Type: Choose whether the impact is "Positive" or "Negative."
  4. Potential Impacts: Select a predefined impact from the dropdown list or type a custom impact if it’s not listed. For more details about potential impacts, click the "?" icon.
  5. Set Impact Level: Choose an impact level from 1 (very low) to 5 (very high) to indicate the severity of the impact.
  6. Optional Description: Add additional details describing the specific impact.

Click Add to assign this impact assessment to the corresponding AI system and lifecycle stage.

Example: For a healthcare AI diagnostic tool:

  • Lifecycle Stage: Operate and Monitor
  • Impact On: Individuals
  • Impact Type: Positive
  • Potential Impact: Improved diagnosis accuracy
  • Impact Level: 4 - High
  • Description: The AI system aids in early diagnosis of diseases, potentially increasing treatment success rates.

A.5.5 Assessing Societal Impacts of AI Systems

The organization should assess and document the potential societal impacts of their AI systems throughout the system's life cycle. This assessment includes both positive and negative impacts on areas like the environment, economy, government, health, and cultural values.

Assess Societal Impacts of AI Systems

Evaluate the potential societal impacts of AI systems, considering both beneficial and detrimental outcomes, to support responsible deployment and alignment with organizational goals.

Examples of Societal Impact Areas to Consider:

  • Environmental Sustainability: Assess impacts on natural resources, energy consumption, and greenhouse gas emissions.

    • Example: An AI system for optimizing logistics may reduce carbon emissions by improving transportation efficiency.
  • Economic Impact: Evaluate effects on employment, access to financial services, and economic growth.

    • Example: An AI-driven credit scoring system could improve financial inclusion by offering loans to underserved populations.
  • Government and Politics: Consider the implications for legislative processes, national security, and misinformation.

    • Example: AI tools for content generation could potentially create deepfakes, impacting public trust and political stability.
  • Health and Safety: Analyze impacts on healthcare access, treatment quality, and potential safety risks.

    • Example: AI-based diagnostic tools may improve early disease detection, enhancing public health outcomes.
  • Norms, Traditions, Culture, and Values: Reflect on how AI affects societal values and cultural norms, including potential biases.

    • Example: A language model trained on biased data may unintentionally reinforce stereotypes, impacting cultural values.

How to Fulfill in Validaitor Platform

For each AI system, users should:

  1. Select the Lifecycle Stage where the impact assessment is relevant (e.g., Plan and Design, Deploy and Use).
  2. Choose Impact On: Set to "Society Impacts" to indicate societal-level effects.
  3. Select Impact Type: Choose "Positive" or "Negative" to denote the nature of the impact.
  4. Potential Impacts: Choose a relevant societal impact from the dropdown list or enter a custom impact (e.g., environmental sustainability, economic impact). For more details about potential impacts, click the "?" icon.
  5. Set Impact Level: Assign an impact level from 1 (very low) to 5 (very high) to quantify the societal effect.
  6. Optional Description: Provide additional details to clarify the impact assessment.

Click Add to save this societal impact assessment for the corresponding AI system and lifecycle stage.

Example: For an AI system used in healthcare:

  • Lifecycle Stage: Deploy and Use
  • Impact On: Society Impacts
  • Impact Type: Positive
  • Potential Impact: Health and Safety
  • Impact Level: 5 - Very High
  • Description: AI system supports accurate and early diagnosis, improving population health outcomes and access to healthcare.

AI System Lifecycle

To ensure that the organization identifies and documents objectives and implements processes for the responsible design and development of AI systems.

A.6.1.2 Define and Document AI Objectives

The organization should define and document clear objectives for responsible AI development. These objectives guide the design, development, and deployment processes to align with the organization’s values and ethical standards.

Define AI Objectives

Establish and document specific AI objectives, taking into account their impact on the AI system's design, data handling, model training, validation, and overall development process. Ensure that these objectives are integrated into each stage to promote responsible AI development.

How to Fulfill in Validaitor Platform:

  • List Response:
    1. AI Objective Name: Select an objective name from the dropdown list or create a new objective that aligns with the organization’s goals.
    2. AI Objective Description: Describe how this objective will shape the AI system's development, implementation, and management. The description should include:
      • The relevance of the objective to the organization’s values and goals.
      • Specific areas where the objective will influence the AI system, such as data acquisition, model training, deployment, and user interaction.
      • Guidelines or requirements for achieving the objective, such as using specific methods or tools to ensure alignment (e.g., fairness testing tools for a “Fairness” objective).

Example:

  • AI Objective Name: Fairness
  • AI Objective Description: Ensure that the AI system is developed and deployed in a manner that avoids bias and discrimination, particularly in automated decision-making affecting employment opportunities. This objective will guide data selection, model training, and validation processes to uphold fairness standards.

A.6.1.3 Processes for Design and Development of AI Systems

The organization should define and document specific processes to guide the responsible design and development of AI systems. These processes should ensure ethical, safe, and effective deployment aligned with the organization’s objectives and values.

Processes for Design and Development of AI Systems

Establish and document processes for the design and development of AI systems, ensuring consideration of critical aspects such as testing, human oversight, data requirements, and lifecycle management. Use the descriptions below to adapt each process to your organization’s context and specific needs.

Guidance for Each Development Process Item:

  1. Life Cycle Stages:

    • Description: Define the AI system’s life cycle stages, such as planning, design, testing, deployment, monitoring, and decommissioning. Specify each stage's purpose and requirements.
    • Example: Our life cycle stages include planning, data acquisition, model development, deployment, and monitoring, with tailored activities for each to support robust AI system management.
  2. Testing Requirements:

    • Description: Outline planned testing methods and protocols to verify the AI system’s functionality, accuracy, and compliance with standards. Include unit, integration, and validation testing as applicable.
    • Example: Regular performance testing and bias detection are required at each stage of model development to ensure compliance with fairness and accuracy standards.
  3. Human Oversight Requirements:

    • Description: Define the role of human oversight, especially in areas where the AI system impacts individuals. Specify tools, processes, and responsibilities for monitoring and intervening as needed.
    • Example: Human reviewers will monitor AI decisions affecting personal data, with intervention protocols for cases of detected bias or error.
  4. AI System Impact Assessments:

    • Description: Identify when and how impact assessments will be conducted to evaluate the system’s effects on individuals and society. Specify which stages require assessments.
    • Example: Impact assessments will be conducted at design, deployment, and monitoring stages to ensure alignment with ethical guidelines and societal values.
  5. Training Data Expectations:

    • Description: Describe data requirements, such as the types of data allowed, approved sources, and labeling standards, to ensure high-quality and ethically sourced training data.
    • Example: Only data from verified sources will be used, with clear labeling for transparency and consistency across training and validation datasets.
  6. Expertise:

    • Description: Detail the necessary expertise and training for developers and team members working on AI systems. Specify required knowledge areas, certifications, or training programs.
    • Example: AI developers must have knowledge in ethical AI principles, data privacy, and fairness evaluation techniques, with periodic training on emerging standards.
  7. Release Criteria:

    • Description: Define criteria for system release, including necessary approvals and sign-offs. Ensure all stakeholders understand the conditions for moving the AI system to production.
    • Example: Release criteria include accuracy benchmarks, user testing, and executive sign-off to confirm readiness for deployment.
  8. Change Control:

    • Description: Establish change control protocols to manage modifications to the AI system, ensuring usability, controllability, and risk management.
    • Example: Change requests must be documented, reviewed, and approved by the change control board to maintain system stability and security.
  9. Engagement of Interested Parties:

    • Description: Specify how stakeholders, including users and affected individuals, will be engaged throughout the development and deployment of the AI system. Outline communication and feedback mechanisms.
    • Example: Regular meetings and feedback sessions will be held with users and stakeholders to address concerns and incorporate insights into system updates.

How to Fulfill in Validaitor Platform:

  • List Response: For each of the default items, adapt the description field to reflect how each process aligns with your organization’s specific context, goals, and ethical standards. Use the examples above as a guide to ensure that the processes are clearly defined and actionable. You can add additional items with the "Add" button at the bottom as needed, but all default items must be completed.

A.6.2.2 Define Purpose and Requirements for AI System Development

The organization should define and document the purpose for the development of the AI system and outline specific requirements and specifications to guide its life cycle. This ensures clarity on the rationale and objectives driving the AI system’s development and helps structure its design, implementation, and deployment phases.

Define Purpose for Development of AI System

Define the purpose for developing the AI system, documenting the specific goals and motivations for its creation.

Purpose for Development of AI System:

In this section, specify the reasons for developing the AI system. Consider:

  • Business Case or Customer Demand: State if the AI system development is driven by a business need, customer request, or operational goal.
  • Policy or Compliance Requirements: Mention if the purpose is to comply with regulatory standards or government policies.
  • Value Addition: Describe how the AI system will add value, improve efficiency, enhance decision-making, or support strategic goals.

How to Fulfill in Validaitor Platform:

  • Free Text Response: Enter a clear and concise purpose statement for each AI system, addressing the motivation, value addition, and alignment with strategic goals.

Example: For an AI system aimed at customer service automation:

  • Goal: The AI system is being developed to enhance customer support efficiency by automating responses to common inquiries, reducing response times, and improving customer satisfaction. This initiative aligns with the organization’s goal of delivering high-quality customer service while minimizing operational costs.

Define AI System Requirements and Specifications

Define and document the requirements and specifications for the AI system across each life cycle stage, ensuring alignment with its intended purpose.

Guidance for Each AI Lifecycle Stage Specification:

  1. Plan and Design:

    • Specification: Describe initial design considerations, including ethical concerns, fairness, and data privacy. Specify the system’s core functionality and constraints.
    • Example: The AI system should incorporate privacy by design, ensuring data encryption and anonymization measures are in place from the outset.
  2. Collect and Process Data:

    • Specification: Outline requirements for data collection, including approved sources, data quality standards, and processing steps.
    • Example: Data should be sourced from verified suppliers, and quality checks should ensure that data is free from biases affecting model performance.
  3. Build and Use Model:

    • Specification: Detail model architecture, training protocols, and expected performance benchmarks.
    • Example: The model should be built using supervised learning, with accuracy benchmarks set at 95% for customer query classification.
  4. Verify and Validate:

    • Specification: Define testing requirements to validate model accuracy, robustness, and compliance with ethical standards.
    • Example: Conduct fairness testing to identify potential biases in decision-making, ensuring equitable treatment across demographics.
  5. Deploy and Use:

    • Specification: Specify deployment requirements, including necessary infrastructure, integration steps, and access controls.
    • Example: The model will be deployed on a secure cloud platform with access restricted to authorized personnel only.
  6. Operate and Monitor:

    • Specification: Outline monitoring procedures to track performance, user feedback, and update needs. Specify triggers for retraining and improvements.
    • Example: Continuous monitoring should include logging system outputs and user feedback, with retraining scheduled quarterly to address any drift in accuracy.

How to Fulfill in Validaitor Platform:

  • Free Text Response: For each lifecycle stage, document specific requirements and standards tailored to that stage. Use the examples provided above as a guide to ensure comprehensive coverage of requirements across the AI system’s life cycle.

A.6.2.3 Document AI System Design

The organization should document the AI system's design and development in detail, aligning with organizational objectives, requirements, and specification criteria. This documentation provides a comprehensive overview of design choices, architecture, and considerations across the AI system’s life cycle.

Document AI System Design

Create and upload a detailed design document for the AI system, ensuring it addresses essential components and decisions made throughout the design and development stages.

Guidance for Documenting AI System Design:

The design document should provide a structured overview of the AI system, covering the following key aspects:

  1. Machine Learning Approach:

    • Describe the type of learning used, such as supervised, unsupervised, reinforcement learning, or another method. Explain why this approach was selected based on the system’s purpose and requirements.
    • Example: The AI system uses supervised learning to classify customer support queries, ensuring high accuracy in automated responses.
  2. Learning Algorithm and Model Type:

    • Specify the learning algorithm (e.g., decision trees, neural networks) and model type utilized. Justify the choice based on the system's objectives.
    • Example: A convolutional neural network (CNN) was chosen for image recognition tasks due to its ability to identify spatial hierarchies in visual data.
  3. Training Method and Data Quality:

    • Outline the data quality standards and processes for training, including data sources, pre-processing methods, and quality checks.
    • Example: Data quality is ensured by pre-processing steps to remove noise, standardize formats, and balance the dataset across demographics to minimize biases.
  4. Model Evaluation and Refinement:

    • Document the evaluation metrics, testing protocols, and iterative refinement processes used to improve model accuracy and robustness.
    • Example: Model performance is evaluated using F1 score and ROC-AUC metrics, with iterative refinements based on testing outcomes (a metrics sketch follows this list).
  5. Hardware and Software Components:

    • List and describe the hardware (e.g., GPUs, CPUs) and software (e.g., libraries, frameworks) required for model training, deployment, and operation.
    • Example: Training is conducted on NVIDIA GPUs with TensorFlow as the primary framework, supporting large-scale data processing needs.
  6. Security Threats and Mitigations:

    • Identify potential security threats specific to AI, such as data poisoning, model theft, or adversarial attacks, and document the measures taken to mitigate these risks.
    • Example: Data poisoning is mitigated by continuous data validation, while model inversion attacks are countered by limiting model access to authorized users only.
  7. Interface and Output Presentation:

    • Describe how the system’s outputs are presented, including the user interface, visualization elements, and interpretability features to enhance user understanding.
    • Example: The AI system provides a dashboard with visualizations and confidence scores to help users interpret classification results accurately.
  8. Human Interaction and Oversight:

    • Document how human users can interact with the system, including any override or feedback mechanisms for quality control.
    • Example: Human reviewers can validate and override AI-generated classifications in customer support scenarios to ensure appropriate responses.
  9. Interoperability and Portability Considerations:

    • Specify how the AI system can integrate with other systems and whether it supports portability across different platforms or environments.
    • Example: The system is designed for cloud-based deployment but supports containerization via Docker to ensure compatibility across multiple environments.
  10. System Architecture:

    • Include a comprehensive architecture diagram showing all components, data flow, and dependencies in the system. This final architecture should represent the AI system’s complete structure and functionality.
    • Example: A diagram illustrating data inputs, processing layers, model integration, user interface, and feedback loops.
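
To make item 4 concrete, the sketch below computes the metrics named there with scikit-learn. The synthetic dataset and logistic regression model are stand-ins for the organization's actual model and data; the printed scores are not benchmarks from any real system.

```python
# Minimal sketch of the evaluation step in item 4, using scikit-learn's
# standard F1 and ROC-AUC metrics on a synthetic binary classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability of the positive class

print(f"F1 score: {f1_score(y_test, y_pred):.3f}")
print(f"ROC-AUC:  {roc_auc_score(y_test, y_prob):.3f}")
```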

How to Fulfill in Validaitor Platform:

  • Upload Document: Create a structured AI system design document covering each of the above components, ensuring it reflects the design decisions and architecture of the AI system comprehensively. Upload this document in the platform to fulfill the requirement.

A.6.2.4 Define and Run Verification and Validation Measures

The organization should define and document verification and validation measures for the AI system, specifying criteria for their use. This ensures that the AI system meets performance, safety, and ethical standards and aligns with responsible AI objectives.

Define Verification and Validation Measures

Define and document the verification and validation measures for the AI system. Specify the test suites, criteria, and metrics to assess the system’s performance, reliability, and impacts.

How to Fulfill in Validaitor Platform:

  • Link Test Suites: For each AI model within the AI system, link one or more test suites from the available options (e.g., Fairness Benchmarks, Security Benchmarks, Custom Benchmarks). If custom test suites are needed, create them in the Test Suites section before linking them with an AI model.

  • Example: For a facial recognition AI model, complete test suites related to privacy, accuracy, and bias detection, and document the outcomes.

Run Verification and Validation Measures

Run the linked test suites to verify and validate the AI system against the defined criteria. Ensure that all tests complete successfully.

How to Fulfill in Validaitor Platform:

  • Run Test Suites: Run each defined test suite from the previous requirement. By clicking the "Add Configurations" button, users can specify the number of prompts to use and then execute the test suite. Once all test suite runs are marked as completed, the requirement will be fulfilled.
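
As an illustration of what a custom fairness check inside such a test suite might compute, the sketch below measures a demographic parity gap on toy data. The metric choice, the 0.10 threshold, and the data are all illustrative assumptions; Validaitor's built-in benchmarks may use different measures.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy data: binary model decisions and a binary protected attribute.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=500)
group = rng.integers(0, 2, size=500)

gap = demographic_parity_difference(y_pred, group)
threshold = 0.10  # illustrative pass/fail threshold, not an ISO requirement
status = "PASS" if gap <= threshold else "FAIL"
print(f"Demographic parity gap: {gap:.3f} -> {status}")
```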

A.6.2.5 Document Deployment Plan

The organization should document a comprehensive deployment plan to ensure that all necessary requirements are met prior to deploying the AI system. This plan should address deployment environment, release criteria, stakeholder impacts, and post-deployment monitoring.

Document Deployment Plan

Create and upload a deployment plan that outlines essential steps, checks, and approvals required before the AI system goes live.

Guidance for Documenting the Deployment Plan:

  1. Deployment Environment and Component Strategy: Specify the environment where the AI system will be deployed (e.g., on-premises, cloud) and document how various components (e.g., model, interface) will be integrated. Include any requirements for compatibility and security configurations.

  2. Release Criteria and Requirements: Define the criteria that must be met before deployment, such as performance benchmarks, security checks, and validation results. This ensures the system meets quality and compliance standards. Include key verification and validation outcomes if available.

  3. User Testing, Feedback, and Approvals: Summarize any user testing performed, including pilot runs, and document relevant feedback. Note any approvals required from stakeholders, such as project managers or compliance officers, confirming system readiness. (e.g., Pilot tests showed a 95% satisfaction rate, approved by Head of AI and CISO)

  4. Stakeholder Impact and Rollout Strategy: Consider potential impacts on users and stakeholders, and outline a phased or full-scale rollout strategy. This should include any training or support needed for a smooth transition. Include fallback plans if issues arise. (e.g., Limited initial rollout for customer service team with support for training and feedback collection)

  5. Post-Deployment Monitoring: Outline a plan for monitoring system performance after deployment, specifying frequency and methods for tracking system stability and user satisfaction. (e.g., Daily performance checks for the first month, transitioning to monthly reviews)
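
Release criteria and approvals (items 2 and 3 above) can often be reduced to a mechanical gate that is easy to audit. The sketch below is a hypothetical pre-deployment check in Python; the criteria names, thresholds, and approver list are assumptions for illustration only.

```python
# Hypothetical pre-deployment gate: all release criteria must be met and all
# required sign-offs collected before the AI system goes live.
RELEASE_CRITERIA = {
    # name: (required value, measured value) -- illustrative numbers
    "accuracy": (0.95, 0.97),
    "pilot_satisfaction": (0.90, 0.95),
}
REQUIRED_APPROVALS = {"Head of AI", "CISO"}

def ready_to_deploy(criteria: dict, approvals: set) -> bool:
    metrics_ok = all(measured >= required for required, measured in criteria.values())
    approvals_ok = REQUIRED_APPROVALS.issubset(approvals)
    return metrics_ok and approvals_ok

print(ready_to_deploy(RELEASE_CRITERIA, {"Head of AI", "CISO"}))  # True
```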

How to Fulfill in Validaitor Platform:

  • Upload Document: Prepare a deployment plan that addresses each of these areas in a clear, organized format. Upload the document in the Validaitor platform to fulfill the requirement.

A.6.2.6 Document Operation and Monitoring Plans

The organization should document the necessary elements for the ongoing operation and monitoring of the AI system. This ensures effective maintenance, performance tracking, and support to address issues and sustain reliable system function over time.

Document Operation Plan

Create and upload an operation plan that outlines the key elements required for the AI system’s smooth and reliable operation.

Guidance for Documenting the Operation Plan:

  1. System Maintenance and Repairs: Outline processes for identifying and addressing system errors, failures, and general maintenance needs. Include procedures for regular maintenance checks and handling unexpected issues.

  2. Update Procedures: Specify how updates to the system will be managed, including criteria for necessary updates, scheduling, and communication with users regarding update content.

  3. Support and Incident Management: Detail support processes, including user access to support, reporting channels for issues, and response times. Define service levels and metrics for effective issue resolution.

  4. Operational Changes: Describe how any changes to the system’s functionality or intended use will be managed and communicated to users, ensuring transparency and compliance.

Example: The operation plan includes a bi-monthly maintenance schedule, a ticketing system for reporting incidents, and regular updates to address identified issues and maintain compliance.

Document Monitoring Plan

Create and upload a monitoring plan that defines how the AI system’s performance and functionality will be continuously assessed to ensure reliable operation.

Guidance for Documenting the Monitoring Plan:

  1. System and Performance Monitoring: Define metrics for monitoring system performance, such as success rates, error detection, and compliance with technical performance criteria. Include thresholds or alerts for when retraining or adjustments are needed.

  2. Concept and Data Drift Detection: Specify monitoring procedures for detecting changes in system performance due to concept or data drift, even if the system does not use continuous learning. Detail any retraining requirements triggered by performance changes.

  3. AI-Specific Security Threat Monitoring: Outline measures to detect and respond to AI-specific security threats, including data poisoning, model theft, or inversion attacks, to maintain data integrity and system security.

  4. Compliance with User and Legal Requirements: Ensure the monitoring plan includes compliance checks with customer expectations and legal obligations, to avoid non-compliance in operational data environments.

Example: The monitoring plan includes daily automated checks for system errors, monthly assessments for concept drift, and continuous security threat detection with alerts for potential AI-specific vulnerabilities.
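
One common way to automate the drift checks in item 2 is a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against recent production data. The SciPy sketch below uses synthetic data, and the 0.01 significance level is an illustrative choice rather than a prescribed value.

```python
# Sketch of a data-drift check (item 2): compare a feature's training
# distribution against recent production data with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted: drift

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative significance level
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); flag for retraining review")
else:
    print("No significant drift detected")
```

A check like this can run on a schedule and feed the alert thresholds defined in item 1, so that retraining reviews are triggered by evidence rather than fixed intervals alone.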

How to Fulfill in Validaitor Platform:

  • Upload Document: Prepare an operation plan and a monitoring plan covering each of these areas, ensuring that the documents provide a structured, actionable approach to maintaining and monitoring the AI system. Upload these documents in the Validaitor platform to meet the requirements for Document Operation Plan and Document Monitoring Plan.

A.6.2.7 AI System Technical Documentation

The organization should determine and provide the necessary technical documentation for the AI system, tailored to the needs of relevant interested parties, such as users, partners, and supervisory authorities. This ensures that stakeholders have access to appropriate, comprehensible, and actionable information about the AI system.

Verify Technical Documentation

Ensure that all required technical documentation is available and properly provided to relevant stakeholders.

How to Fulfill this Requirement:

  • Automatically Fulfilled: This requirement is automatically fulfilled by completing the documentation requirements in the following sections:

    • A.6.2.2 Define Purpose for Development of AI System
    • A.6.2.3 Document AI System Design
    • A.6.2.4 Define and Run Verification and Validation Measures
    • A.6.2.6 Document Operation and Monitoring Plans

No additional action is needed if these sections are completed. The platform will verify that the necessary technical documentation is in place based on the documentation provided in these related controls.

A.6.2.8 Define Event Logging Requirements

The organization should determine at which phases of the AI system life cycle event logging should be enabled. At a minimum, event logging must be active when the AI system is in use to ensure traceability, monitor performance, and detect undesirable outcomes.

Define Event Logging Requirements

Specify the AI system lifecycle stages where event logging is necessary. Document the storage location and logging method for each stage to ensure proper traceability and performance monitoring.

Recording of Events Process:

  1. AI System Lifecycle Stage:

    • Select the phase of the AI system life cycle for which event logging is being configured. At a minimum, event logging should be enabled for the "Operate and Monitor" stage to capture logs when the system is in actual use.
  2. Location:

    • Provide the storage location for the event logs. This could be a directory path, a database, or a cloud storage endpoint where logs are securely stored.
    • Examples:
      • Local storage: "/var/logs/AI_System/"
      • Cloud storage: "s3://company-logs/AI_System/"
      • Database: "DatabaseTable: AI_EventLogs"
  3. Method:

    • Describe the logging method or protocol used to capture and record event data. This could involve direct database entry, API endpoint submissions, or using specific logging tools or services.
    • Examples:
      • "Direct to Database": Logs are stored directly in a specified database table.
      • "HTTP POST to /log-endpoint": Logs are sent via HTTP POST requests to a designated endpoint.
      • "Integrated with Cloud Logging Service": Uses a cloud service such as AWS CloudWatch or Azure Monitor for log storage and management.

How to Fulfill in Validaitor Platform:

  • Select Lifecycle Stage: Choose the appropriate lifecycle stage from the dropdown where logging should be enabled.
  • Enter Location: Specify where the logs will be stored for the selected lifecycle stage.
  • Enter Method: Define the method or protocol used to record and store logs.

Data for AI Systems

To ensure that the organization understands the role and impacts of data in AI systems in the application and development, provision or use of AI systems throughout their life cycles.

A.7.2 Define Data Management Processes

The organization should define, document, and implement data management processes related to the development of AI systems. This ensures that data used in AI development is handled responsibly, addressing privacy, security, transparency, and data quality.

Define Data Management Processes

Create and upload a comprehensive document that outlines data management processes specific to AI development.

Guidance for Documenting Data Management Processes:

The document should cover the following areas to ensure a robust data management framework for AI development:

  1. Privacy and Security Implications: Describe how sensitive or personal data is handled to ensure privacy and security, including any encryption, access controls, or anonymization techniques used to protect data (e.g. Personal data used in model training is anonymized and stored in an encrypted database with restricted access). See the illustrative sketch after this list.

  2. Data-Related Security and Safety Threats: Outline potential threats arising from data use in AI, such as data poisoning attacks or model inversion, and specify safeguards to mitigate these risks (e.g. Regular audits are conducted to detect any tampered data, and models are trained with robust validation to prevent data poisoning).

  3. Transparency and Explainability: Document processes to ensure data provenance (source tracking) and explain how data influences AI outputs, especially for systems requiring transparency and explainability (e.g. Each dataset is labeled with metadata indicating its source and usage purpose, ensuring traceability throughout the development process).

  4. Representativeness of Training Data: Specify how the organization ensures that training data represents the intended operational domain, minimizing biases and improving generalization in real-world applications (e.g. The training dataset is periodically reviewed to confirm its representativeness of diverse user demographics to reduce potential biases in the model’s outputs).

  5. Data Accuracy and Integrity: Describe methods used to ensure the accuracy and integrity of data throughout the development process, including validation steps, error-checking, and data cleaning protocols (e.g. Data accuracy is ensured through automated validation scripts, which flag inconsistencies or inaccuracies for further review).
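
As a concrete example of the anonymization techniques mentioned in item 1, the sketch below pseudonymizes a direct identifier with a salted hash before the data enters a training pipeline. This is a simplified illustration: the column names are invented, and a real deployment would manage the salt as a secret and pair this step with encryption at rest and access controls.

```python
# Illustrative pseudonymization step: replace direct identifiers with
# salted hashes so raw PII never reaches the training pipeline.
import hashlib
import pandas as pd

SALT = "example-salt-rotate-me"  # assumption: stored in a secrets manager

def pseudonymize(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

df = pd.DataFrame({"email": ["a@example.com", "b@example.com"], "age": [34, 51]})
df["email"] = df["email"].map(pseudonymize)
print(df)
```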

How to Fulfill in Validaitor Platform:

  • Upload Document: Prepare a data management process document that addresses each of these key areas. Ensure the document provides a structured, detailed framework for handling data responsibly in the development of AI systems. Upload this document in the Validaitor platform to meet the requirement.

A.7.3 Document Data Acquisition

The organization should determine and document details about the acquisition and selection of data used in AI systems. This documentation ensures that data sources are appropriate, compliant, and well-documented for responsible AI development.

Document Data Acquisition

Complete the dataset details by providing relevant information in each field. You can create a new dataset or select from existing datasets.

Guidance for Filling Out Each Field:

  1. Name: Provide a descriptive name for the dataset to easily identify it within the AI system (e.g., "Customer Purchase Data 2023").

  2. Category: Select the data category that best represents the dataset.

  3. Source: Specify the origin of the data. This helps clarify data ownership and accessibility.

  4. Characteristics: Describe the characteristics of the dataset. This helps define the nature of the data for usage considerations (e.g., "streamed real-time sensor data").

  5. Demographics & Biases: Describe the demographics of data subjects represented in the dataset and note any known or potential biases. This can include age, gender, location, or other demographic indicators relevant to the dataset (e.g., "contains data primarily from users aged 18-35 in urban areas").

  6. Data Rights: Specify any data rights or restrictions, such as personally identifiable information (PII) or copyright considerations, to ensure compliance with legal and ethical standards (e.g., "contains PII, restricted under GDPR").

  7. Prior Handling: Describe any previous uses of the data and any conformity checks with privacy or security requirements that have been applied. This helps in understanding any past data processing (e.g., "data was previously anonymized and validated for security compliance").

  8. Metadata: Provide additional details about data labeling, tagging, or any processes used to enhance the data quality (e.g., "labeled for sentiment analysis with accuracy checks in place").

  9. Provenance: Document the origin or lineage of the data, explaining where it was sourced or how it was generated. This information is essential for traceability and data validation (e.g., "data collected from user surveys conducted in Q1 2023").

How to Fulfill in Validaitor Platform:

  • Add a New Dataset: Clicking Add Dataset navigates to the Data Assets tab. Fill in each field with the relevant information as outlined above.
  • Select Existing Dataset: If the required dataset is already documented, click Select Dataset to add it to the AI system without re-entering details.

A.7.4 Define Data Quality Requirements and AI System Data Quality Assessment

The organization should establish and document clear data quality requirements to ensure that data used to develop and operate the AI system is appropriate, accurate, and reliable. This process includes defining quality standards and assessing datasets to meet these standards.

Define Data Quality Requirements

Create a list of data quality requirements that your organization considers essential for the AI system's performance. You can add new requirements or use predefined ones by adjusting their descriptions to suit your organization's context.

Guidance for Filling Out Each Field:

  1. Name: Choose a name that clearly indicates the data quality requirement, such as "Data Accuracy," "Data Security," or "Data Integrity and Accuracy."

  2. Description: Provide a tailored description explaining why this requirement is essential and what it entails. Modify default descriptions to align with your organization's standards and objectives.

  3. Examples:
    • Data Accuracy: Ensure that the data used in AI systems is correct, consistent, and free of errors. This requirement helps maintain the reliability of AI outputs.
    • Data Security: Protect data from unauthorized access, maintaining confidentiality and integrity to safeguard sensitive information.
    • Data Integrity and Accuracy: Ensure that the data used in AI systems is accurate, complete, and reliable for consistent results in production.

Once all necessary requirements are defined, save this information to guide data handling and quality checks across the AI system lifecycle.

How to Fulfill in Validaitor Platform:

  • List Response: Create or select data quality criteria. Customize the descriptions to reflect your organization’s standards.

AI System Data Quality Assessment

Assess each dataset associated with the AI system against the defined data quality requirements to confirm it meets the necessary standards. This helps maintain the validity and fairness of AI outputs.

Guidance for Filling Out Each Field:

  1. Explanation: In this field, provide a detailed explanation of how the dataset meets each data quality requirement. Describe any specific checks, metrics, or validation methods used to ensure compliance.
  2. Examples:
    • Data Accuracy: Describe methods used to verify data accuracy, such as automated error-checking scripts or manual review processes. (e.g., "Data entries are cross-validated with source documents to prevent inaccuracies.")
    • Data Security: Explain how the dataset is secured to prevent unauthorized access, including any encryption, access restrictions, or monitoring tools in place. (e.g., "Dataset is stored in an encrypted database with restricted access to authorized personnel only.")
    • Data Integrity and Accuracy: Provide details on measures to maintain data integrity, such as regular integrity checks or use of data validation tools. (e.g., "Dataset undergoes weekly integrity checks to ensure completeness and consistency.")
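
The Explanation fields are easiest to back up with repeatable, automated checks. Below is a minimal pandas sketch of such checks; the column names, rules, and thresholds are illustrative assumptions that each organization would replace with its own documented requirements.

```python
# Sketch of automated checks backing the Explanation fields above:
# each data quality requirement maps to a concrete, repeatable test.
import pandas as pd

df = pd.DataFrame({"age": [34, 51, None, 29], "country": ["DE", "DE", "FR", "FR"]})

checks = {
    "no_missing_values": df.isna().sum().sum() == 0,
    "no_duplicate_rows": not df.duplicated().any(),
    "age_in_valid_range": df["age"].dropna().between(0, 120).all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```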

How to Fulfill in Validaitor Platform:

For each dataset associated with an AI system, document the methods and checks used to verify compliance with each quality requirement in the Explanation fields.

A.7.5 Establish Data Provenance Process

The organization should define and document a process for recording the provenance of data used in its AI systems throughout the data and AI system life cycles. This process helps ensure traceability and accountability of data handling.

Establish Data Provenance Process

Create and upload a document outlining the organization’s approach to data provenance for AI systems. Ensure the following essential elements are covered:

  • Data Creation and Source: Document the origin of the data, including initial validation steps taken to confirm authenticity (e.g., "Data gathered from internal customer survey, verified by cross-referencing with purchase history").

  • Updates and Modifications: Describe how changes to the data are recorded, such as updates, corrections, and transformations, to maintain a clear record of alterations over time (e.g., "All data transformations are logged with timestamps and modification descriptions").

  • Control and Sharing: Outline procedures for tracking transfer of control (ownership) or instances of data sharing without transfer of control, ensuring responsible data handling (e.g., "Data shared with external partners under strict confidentiality agreements, with access logs").

  • Verification and Validation Measures: Summarize any additional verification steps to confirm the reliability and integrity of the data source, particularly if data is acquired externally.
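
A minimal sketch of an append-only provenance log covering the four elements above is shown below; the event vocabulary and field names are illustrative assumptions, not a prescribed schema.

```python
# Append-only provenance log: one JSON-lines entry per lifecycle event
# (creation, modification, control/sharing, verification).
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProvenanceEvent:
    dataset: str
    event: str    # "created" | "modified" | "shared" | "verified" (assumed vocabulary)
    actor: str
    details: str

def record(event: ProvenanceEvent, path: str = "provenance_log.jsonl") -> None:
    entry = {"at": datetime.now(timezone.utc).isoformat(), **asdict(event)}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record(ProvenanceEvent("customer_survey_q1", "created", "data_team",
                       "Gathered from internal survey; cross-checked with purchase history"))
record(ProvenanceEvent("customer_survey_q1", "modified", "etl_job_7",
                       "Normalized date formats to YYYY-MM-DD"))
```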

How to Fulfill in Validaitor Platform

  • Upload Document: Prepare a concise document covering these points and upload it in the Validaitor platform to meet the requirement.

A.7.6 Data Preparation

The organization shall define and document its criteria for selecting data preparations and the data preparation methods to be used.

Define Data Preparation Criteria

Establish clear criteria for selecting and implementing data preparation methods to ensure that the data meets the quality and structure requirements needed for effective AI training and deployment. This involves specifying the standards or guidelines that will inform which data preparation techniques are applied to make the data suitable and reliable for the AI system’s intended purpose.

Documenting these criteria helps ensure consistency, traceability, and alignment with the organization’s objectives and regulatory standards.

Common Data Preparation Criteria

  1. Data Labelling: Data labelling is the process of assigning meaningful labels to data instances to facilitate supervised learning.

    • Example: Labeling images in a dataset as "cat," "dog," or "bird" to train an image classification model.
  2. Data Cleaning: Data cleaning is the process of detecting and correcting (or removing) corrupt or inaccurate records from a dataset.

    • Example: Removing rows with missing values or correcting spelling errors in text data.
  3. Data Normalization: Data normalization is the process of organizing data to ensure consistency and efficiency, often by scaling values within a particular range.

    • Example: Scaling all numerical features in a dataset to a range of 0 to 1.
  4. Data Encoding: Data encoding is the process of converting data from one form to another, making it suitable for machine learning algorithms.

    • Example: Converting categorical variables like "Male" and "Female" to binary values 0 and 1.
  5. Data Annotation: Data annotation is the process of labeling data to make it understandable for machines, particularly for tasks in supervised learning.

    • Example: Annotating faces in images to train a facial recognition model.
  6. Data De-identification: Data de-identification is the process of removing or modifying personal information from datasets to protect privacy.

    • Example: Removing names and social security numbers from a medical dataset.
  7. Data Composition: Data composition involves combining data from multiple sources into a single dataset for analysis.

    • Example: Merging customer information from multiple departments within an organization into one consolidated dataset.
  8. Data Standardization: Data standardization ensures that data is consistent and uniform in format across datasets.

    • Example: Converting all date formats in a dataset to "YYYY-MM-DD" for consistency.
  9. Data Imputation: Data imputation is the process of replacing missing data with substituted values to maintain data integrity.

    • Example: Filling missing values in a temperature dataset with the mean temperature of the surrounding data points.

Guidelines to Write the Descriptions of Criteria

For each criterion, provide a description that reflects your organization’s specific data preparation needs. This description should outline why the criterion is relevant to your AI system and how it will support data quality, consistency, or usability. The descriptions should be tailored to align with the goals of your organization’s AI systems, considering factors such as data sensitivity, the nature of your AI tasks, and regulatory requirements.

Here’s what to consider when writing descriptions for each Data Preparation Criterion:

  • Purpose of the Criterion: Describe why this particular criterion is essential for your AI system. For example, if using sensitive data, "Data De-identification" might be crucial to ensure compliance with privacy regulations.

  • Intended Outcome: Explain what you aim to achieve with each data preparation step. For example, for "Data Labelling," specify that you want accurately categorized data to support supervised learning.

  • Implementation Guidelines: Briefly mention any standards or best practices that should be followed for each criterion. For instance, for "Data Cleaning," you might include steps for removing duplicates, handling outliers, and fixing inconsistencies.

How to Fulfill in Validaitor Platform

Select a predefined criterion or add a new one from the Data Preparation Criteria dropdown. For each criterion, provide a description that reflects your organization’s needs. Modify the default descriptions to align with your specific data preparation goals.

Define Data Preparation Method

Specify the data preparation methods to be used for each criterion defined previously. For each AI system, select or type the required methods to ensure data is properly prepared according to the standards set by your organization. This process helps improve data quality, maintain consistency, and ensure the data is ready for AI model development.

Data Preparation Methods and Examples:

  • Statistical Exploration: Analyzing data distribution to understand its structure and identify patterns. Common measures include mean, median, standard deviation, and range, which help detect outliers or inconsistencies. Example: Exploring the spread of numerical data to see if any values are unusually high or low (e.g., identifying salary outliers in employee data).

  • Cleaning: Detecting and correcting or removing inaccuracies or inconsistencies in data, ensuring it’s reliable for analysis. Cleaning can involve removing duplicates, fixing typos, or handling missing values. Example: Removing duplicate entries in a customer database or correcting misspelled product names (e.g., changing "iphne" to "iPhone").

  • Imputation: Filling in missing data values using estimated values to maintain dataset completeness. Common imputation methods include using the mean, median, or mode for numerical values or the most frequent value for categorical variables. Example: Replacing missing age values in a dataset with the median age (e.g., filling empty cells in an age column with 35).

  • Normalization: Adjusting data values to fit within a specified range, usually between 0 and 1, making features comparable and reducing bias. Example: Scaling age values to a 0-1 range to ensure compatibility with other scaled features (e.g., converting an age of 25 to 0.3 in a scaled range).

  • Scaling: Adjusting the scale of data, particularly for features with large numerical ranges, to prevent any one feature from dominating the model. Example: Standardizing product weights so that all values are on a similar scale (e.g., rescaling weights recorded in kilograms to a 0 to 100 range).

  • Labelling: Assigning tags or identifiers to data instances, particularly useful in supervised learning where models learn from labeled data. Example: Labeling images of animals with identifiers like "dog," "cat," or "bird" for a classification model.

  • Encoding: Converting categorical variables into numerical representations that AI algorithms can process. Methods include one-hot encoding and label encoding. Example: Transforming "Yes" and "No" responses in a survey into 1 and 0, respectively, for easier model input (e.g., "Yes" = 1, "No" = 0).
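
For illustration, the sketch below applies several of these methods (imputation, normalization, and encoding) to a toy dataset with pandas and scikit-learn. The columns and strategy choices are assumptions; organizations should substitute the methods their documented criteria call for.

```python
# Combined sketch of imputation, normalization, and encoding on a toy dataset.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "age": [25, 41, None, 33],
    "salary": [42000, 58000, 51000, None],
    "subscribed": ["Yes", "No", "Yes", "No"],
})

# Imputation: fill missing numeric values with the column median.
num_cols = ["age", "salary"]
df[num_cols] = SimpleImputer(strategy="median").fit_transform(df[num_cols])

# Normalization: scale numeric features into the 0-1 range.
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])

# Encoding: map the categorical "Yes"/"No" column to 1/0.
df["subscribed"] = (df["subscribed"] == "Yes").astype(int)

print(df)
```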

How to Fulfill in Validaitor Platform

For each AI System, add methods under its relevant criterion by selecting or typing a method, then clicking "Save" to confirm.

Information for Interested Parties

To ensure that relevant interested parties have the necessary information to understand and assess the risks and their impacts (both positive and negative).

A.8.2 System Documentation and Information for Users

The organization shall determine and provide the necessary information to users of the AI system.

Provide User Information

Determine and provide the necessary information to users of the AI system through various methods, such as documented instructions, alerts, notifications, and web pages.

It is essential that this information is clear, complete, up-to-date, and accessible to ensure users can effectively interact with the system and understand its capabilities, limitations, and potential impact on them.

Why This Information is Needed: User information empowers individuals to use the AI system effectively, responsibly, and safely. By informing users about the AI system’s purpose, technical requirements, performance, and risks, organizations help mitigate potential misunderstandings and reduce the risk of unintended consequences. This transparency is especially important when AI systems could impact decisions or actions that directly affect users or other stakeholders.

User Information to Fill:

  1. Purpose of the System: Describe the primary goals and intended outcomes of the AI system. This should include an explanation of how the AI system serves users, stakeholders, or organizational objectives. Example: "The AI system is designed to automate data analysis and generate insights to help business analysts make data-driven decisions. Users were informed via the company’s Knowledge Base and internal emails, where more details are available."

  2. User Interaction Notice: Inform users when they are directly interacting with the AI system and specify the system’s role in the interaction. Example: "Users will see a notification labeled 'Powered by AI' on each page where the AI system is actively generating recommendations or automating responses. This information is displayed directly in the user interface."

  3. Override Information: Provide instructions on how users can override, adjust, or discontinue AI outputs when needed. Example: "If users disagree with the AI-generated recommendation, they can manually adjust the decision in the dashboard. Instructions on overriding AI outputs are available in the user manual and FAQ section."

  4. Technical Requirements: List the technical resources and requirements needed to operate the AI system effectively. Example: "The AI system requires a stable internet connection and a minimum of 8GB RAM to perform optimally. Technical specifications and compatibility information are provided on the system setup page."

  5. Human Oversight: Outline the roles and responsibilities related to human oversight of the AI system, especially in scenarios where human intervention may be necessary. Example: "A human reviewer monitors all high-priority decisions generated by the AI system. If any result exceeds predefined risk thresholds, the system prompts human intervention before proceeding."

  6. Accuracy and Performance: Describe the expected accuracy and performance metrics of the AI system, including any known limitations. Example: "The AI system has an accuracy rate of 95% for standard operations, but may have reduced performance in cases with incomplete data. Detailed performance metrics and limitations are documented in the system's performance report."

  7. Impact Assessment: Inform users of any assessed impacts of the AI system, including potential risks or benefits. Users benefit from understanding how the system might affect them, especially if the system is used in a sensitive or high-stakes context. Example: "The AI system was assessed for potential impacts on data privacy and fairness in automated decision-making. It has minimal risk of bias under standard operating conditions. Impact assessments are published quarterly on the company intranet."

  8. Revisions to Benefits: Communicate any updates or changes to the AI system's anticipated benefits. Example: "Following system updates, the AI is now able to analyze larger datasets and provide more accurate predictions, enhancing its decision support capabilities. Users were informed via an update email and the change log on the system homepage."

  9. Updates and Maintenance: Share details about regular maintenance and system updates, including frequency and purpose. Example: "System maintenance occurs monthly, and new updates are installed automatically to improve performance. Maintenance schedules and update logs are accessible through the support portal."

  10. Contact Information: Provide accessible contact details for users to reach support, report issues, or give feedback. Example: "For support, users can contact the AI helpdesk via support@company.com or call the support line at +1 (800) 123-4567. Support hours and additional contact methods are listed on the support page."

  11. Educational Materials: Offer supplementary resources or materials to help users fully understand and effectively use the AI system. Example: "Users can access tutorials, best practices, and FAQs on the AI system in the Learning Center. Additional workshops and training sessions are scheduled quarterly; users will be notified via email with registration details."

How to Fulfill in Validaitor Platform

For each AI system, users are required to complete the default information description fields to ensure compliance. This can be done by typing the necessary information into the text box under each item and then clicking the save icon to save the entry. Additional information may be added if needed by selecting the "Add" button located at the bottom of the form.

A.8.4 Upload Incident Communication Plan

To ensure transparency and maintain trust, organizations are required to establish a structured plan for notifying users about incidents related to the AI system. This plan should outline the specific types of incidents, notification protocols, and details required to be communicated, ensuring compliance with legal and regulatory obligations.

Upload Incident Communication Plan

Users can fulfill this requirement by uploading a document that provides a comprehensive outline of the incident communication plan for the AI system. This document should cover the following elements:

  1. Types of Incidents to be Communicated: Define which types of incidents require notification to users. This may include incidents specific to AI functionality, information security breaches, data privacy violations (e.g., data breaches involving PII in training data), or any event impacting the reliability or safety of the AI system.

  2. Timeline for Notification: Specify the timeline for notifying users after an incident occurs. This section should address promptness requirements, including any legal or contractual deadlines for notification.

  3. Authority Notifications: Outline any obligations to inform regulatory authorities, based on the nature of the incident, jurisdiction, or other external requirements. This can include identifying the relevant authorities and detailing when and how they must be notified.

  4. Details Required in Communication: Document the information that must be included in incident notifications to users, such as:

    • Nature and cause of the incident
    • Potential impacts on users
    • Steps being taken to mitigate the issue
    • Instructions or recommendations for users, if applicable
  5. Integration with Broader Incident Management: Describe how this AI-specific incident communication plan aligns with the organization's general incident management policies. Highlight any unique considerations for AI, such as data integrity issues in machine learning models or privacy breaches in datasets.

How to Fulfill in Validaitor Platform

  • Upload Document: Prepare a concise document covering these points and upload it in the Validaitor platform to meet the requirement.

A.8.5 Information for Interested Parties

The organization must determine and document its obligations for reporting information about the AI system to relevant interested parties. This includes identifying what type of information is necessary to report, the parties entitled to receive this information, and the timeframe for communication.

Define Reporting Obligations

To fulfill this requirement, the organization should consider the following categories for reporting obligations, as specified by ISO guidelines:

Interested Party Information Details

  1. Information Type: This field specifies the kind of information that must be communicated to interested parties. The default options include:

    • System Performance: Information regarding the AI system’s overall performance and efficiency. Example: Organizations might select this if they are obligated to report performance metrics to regulatory bodies.
    • Data Usage: Details on how data is utilized within the AI system. Example: This may be essential for stakeholders interested in data handling practices, such as internal audits.
    • Ethical Concerns: Information on the ethical considerations surrounding the AI system. Example: If the AI system impacts sensitive demographics, end users or external auditors might require ethical assessment reports.
    • Incident Reports: Reports on specific incidents that affect the AI system's integrity or security. Example: Regulatory bodies might require incident reports in case of breaches or system malfunctions.
  2. Eligible Interested Party: This field specifies the group or individual who should receive the information. The options include:

    • Regulatory Body: For reporting requirements to governmental or regulatory authorities. Example: Organizations should select this when they need to report to compliance authorities on data privacy incidents.
    • End Users: When the information is directly relevant to those who use the system. Example: End users may need updates on system performance if it impacts their interaction.
    • Stakeholders: Internal or external parties invested in the AI system’s success or impact. Example: Stakeholders, like business partners, might require updates on ethical concerns.
    • Internal Audit: For internal compliance and review processes. Example: This option is suitable for periodic reviews by the organization’s audit team.
  3. Time Frame: Specify the frequency with which the information should be reported. Example: A quarterly performance report for stakeholders might be set as "3 months."

How to Fulfill in Validaitor Platform

For each AI system, users can select the relevant fields from the dropdowns and input the desired time frames in the boxes. The dropdowns provide pre-set options for information type and eligible interested parties, while the time frame fields allow users to define reporting intervals in terms of days, months, or years.

Use of AI Systems

To ensure that the organization uses AI systems responsibly and per organizational policies.

A.9.2 Processes for Responsible Use of AI Systems

Define AI Usage Processes

The organization must establish and document processes that ensure the responsible use of AI systems. This includes defining considerations, policies, and approvals required to make decisions about using an AI system, whether developed internally or sourced from third parties.

AI Usage Process Documentation Details

To meet this requirement, the organization’s documentation should cover several key areas:

  1. Process for Determining AI System Use: Outline the specific criteria or approvals needed to decide on the adoption or deployment of an AI system. This includes evaluating whether the system aligns with organizational objectives, assessing any associated costs, and ensuring the system meets legal and regulatory standards.

    Example: "AI systems will be reviewed by the compliance team to ensure alignment with data protection regulations."

  2. Monitoring and Maintenance: Describe the ongoing monitoring and maintenance requirements for AI systems. This should include how the organization will track system performance, manage updates, and address issues to maintain responsible use over time.

    Example: "Monthly system audits will be conducted to verify compliance with operational standards."

  3. Integration with Existing Policies: Detail how the AI system's usage processes integrate with the organization’s existing policies on systems, assets, and data handling. This ensures that AI systems adhere to established guidelines, minimizing the need to develop separate policies.

    Example: "AI system usage policies will follow the organization’s general IT asset management guidelines."

How to Fulfill in Validaitor Platform

In the Validaitor platform, users have two options to meet this requirement:

  • Upload a Document: Users can upload their own documentation that outlines the processes for responsible AI use, covering the areas mentioned above.
  • Create From Template: Alternatively, users can create a document using a template provided by the platform. The template already includes sections for the key areas mentioned above; users need only fill in the details relevant to their organization under each section.

A.9.3 Objectives for Responsible Use of AI System

The organization is required to define and document clear objectives to ensure the responsible use of AI systems. These objectives serve as guiding principles to promote ethical and reliable AI deployment, helping to align the AI system’s usage with the organization’s values and operational standards.

Define AI Usage Objectives

In this step, the organization identifies specific objectives that outline expectations for the responsible use of AI systems. These objectives address various aspects of AI deployment, such as fairness, accountability, and transparency, to create a framework that minimizes risks and maximizes benefits.

Define AI Usage Objective Details

Below are the common objectives and examples of how they can be applied:

  1. Fairness Ensure that the AI system assesses all users fairly and without bias. The organization prioritizes equal treatment, especially in processes impacting diverse user groups, such as hiring, loan approvals, and healthcare access. The AI system should meet standards that prevent bias related to race, gender, age, or other characteristics, supporting inclusivity and fairness in decision-making (a minimal fairness-check sketch follows this list). Example: "In hiring processes, this AI system evaluates candidates equitably, ensuring no bias based on gender, race, or age. Regular audits and fairness assessments are conducted to uphold these standards."

  2. Accountability Establish clear lines of responsibility for decisions made by the AI system. This objective ensures that actions taken by the AI system can be traced back to accountable individuals or teams, enabling transparency and compliance, especially in regulated sectors like finance and healthcare. Responsible parties must review outputs and ensure decisions align with organizational policies. Example: "For AI-driven loan approvals, accountability measures are in place to trace decisions to specific team members, ensuring transparency and compliance with financial regulations."

  3. Transparency Make AI operations and decision-making processes understandable to end-users and stakeholders. Providing accessible explanations of AI-driven outcomes builds trust, especially for public-facing applications like government services. Information on how the system works and how decisions are made should be available to all stakeholders. Example: "For government service delivery, this AI system provides clear, understandable information on decision-making processes to foster public trust and transparency."

  4. Explainability Provide clear and understandable explanations of AI-generated outcomes to users and stakeholders. Explainability is essential for contexts where decisions may directly impact users, such as healthcare or finance, enabling informed decision-making and promoting trust in AI insights. Example: "In healthcare, the AI system explains treatment recommendations to healthcare providers, ensuring that all outcomes can be understood and justified to assist in patient care."

  5. Reliability Ensure the AI system performs consistently and accurately in all operational contexts. This objective is particularly important for industries relying on high-performance systems, such as manufacturing and logistics. The organization sets benchmarks for consistent performance and regularly evaluates the system against these benchmarks. Example: "For defect detection in manufacturing, this AI system consistently meets accuracy standards, and regular performance checks maintain its reliability."

  6. Safety Minimize safety risks associated with the AI system’s deployment. In applications where safety is critical, such as autonomous vehicles or industrial systems, robust safety protocols and testing measures are required to protect users and the environment. Example: "This autonomous driving AI system is equipped with protocols to ensure passenger and pedestrian safety, with routine safety checks and updates based on operational data."

  7. Robustness and Redundancy Ensure that the AI system remains functional and reliable under unexpected conditions or failures. Robustness and redundancy are critical in high-stakes sectors like emergency response and cybersecurity, where system reliability must be maintained under stress. Example: "In cybersecurity, this AI system is designed to withstand attacks and maintain functionality, providing consistent protection even under high-threat conditions."

  8. Privacy and Security Protect user data and maintain confidentiality throughout AI system operations. For applications dealing with sensitive information, such as patient records in healthcare, this objective includes compliance with privacy laws and organizational security policies. Example: "For healthcare applications, this AI system adheres to data protection regulations, ensuring all patient information remains confidential and secure."

  9. Accessibility Ensure the AI system is accessible to a wide range of users, including those with disabilities. Accessibility standards are incorporated into system design, particularly for public-facing services and applications to promote inclusivity and usability. Example: "This AI-powered application is designed to be usable by individuals with visual impairments, meeting accessibility standards and enhancing user inclusivity."

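As referenced under the Fairness objective, the sketch below shows one kind of check a fairness audit might run. It assumes binary decisions grouped by a protected attribute and uses demographic parity as the metric; both the metric and the flagging threshold are illustrative choices, not requirements of the standard or the platform.

```python
# Minimal fairness-audit sketch: compare positive-outcome rates across groups.
def demographic_parity_gap(outcomes):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [sum(group) / len(group) for group in outcomes.values()]
    return max(rates) - min(rates)

# Hypothetical hiring decisions (1 = advanced to interview) per group.
hiring = {"group_a": [1, 0, 1, 1], "group_b": [0, 0, 1, 0]}
gap = demographic_parity_gap(hiring)
print(f"parity gap: {gap:.2f}")  # 0.50; an audit might flag gaps above 0.10
```
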
How to Fulfill in Validaitor Platform

For each AI system, users can select predefined objectives from a dropdown menu or create custom objective names and descriptions. While default names and descriptions are available, users are encouraged to tailor the descriptions to reflect their organization’s unique requirements.

In the description field, users should outline why the objective is necessary and how it applies to their AI system’s usage, including any specific criteria or standards that the organization will follow to meet the objective.

Assign a Responsible Person and Lifecycle for Each AI Usage Objective

For this requirement, the organization must designate an individual from within the organization to oversee each defined AI usage objective at each stage of the AI system’s lifecycle. This ensures accountability and clarity in managing the objectives: each responsible person oversees the implementation and monitoring of their assigned objective across the relevant lifecycle stages.

How to Fulfill in Validaitor Platform

For each AI system and each lifecycle stage, users should:

  1. Select the AI Usage Objective: Choose the objective from the dropdown list. These objectives are the ones previously defined in the "Define AI Usage Objectives" requirement.
  2. Assign a Responsible Person: Select an individual from the organization to be responsible for this objective at the chosen lifecycle stage. This person will be accountable for ensuring that the objective is met and maintained at that specific phase.

In the platform, users can view the AI system lifecycle stages and assign a responsible person for each objective by selecting from available options. Users should ensure that a responsible person is assigned for each AI usage objective across all relevant lifecycle stages to meet the requirement comprehensively.
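
Conceptually, these assignments form a mapping from (objective, lifecycle stage) pairs to responsible persons, as in the sketch below. The stage names and people are placeholders, not the platform’s own lifecycle list or data model.

```python
# Hypothetical ownership map keyed by (objective, lifecycle stage).
assignments = {
    ("Fairness", "Design"):    "J. Doe (ML Lead)",
    ("Fairness", "Operation"): "A. Smith (Compliance)",
    ("Safety",   "Operation"): "A. Smith (Compliance)",
}

def owner(objective, stage):
    """Return the accountable person, or None if the slot is unassigned."""
    return assignments.get((objective, stage))

print(owner("Fairness", "Design"))  # J. Doe (ML Lead)
print(owner("Safety", "Design"))    # None -> a gap to close before approval
```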

A.9.4 Intended Use of the AI System

This control ensures that the AI system is deployed and operated strictly according to its intended uses as documented. By doing so, the organization can maintain the system’s intended performance, prevent unintended consequences, and uphold legal and ethical obligations.

Ensure Intended AI Usage

This requirement safeguards the reliability and accuracy of the AI system by ensuring its usage remains within the boundaries specified in the documentation. It also mandates monitoring of the system’s operation to identify any usage concerns that could impact stakeholders or breach legal requirements.

How to Fulfill in Validaitor Platform

To fulfill this requirement in the Validaitor platform:

  1. Users will see indicators showing whether the previous related controls (6.2.6 for system monitoring, 8.2 for providing user information, and 9.3 for assigning responsible persons) have been completed.
  2. If these previous controls are fulfilled, users need to click the Approve button for each of them. This confirmation ensures that all foundational aspects required for responsible usage of the AI system are in place.
  3. Once the necessary approvals are completed, this requirement will be fulfilled, affirming that the AI system’s usage aligns with its intended purpose as per documented guidelines.

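The gate this workflow describes reduces to a simple condition: every prerequisite control must be completed and then approved. A minimal sketch, with made-up status values:

```python
# Approval gate: the requirement is met only when every prerequisite control
# is both completed and explicitly approved. Status values are illustrative.
status = {
    "6.2.6": {"completed": True, "approved": True},
    "8.2":   {"completed": True, "approved": True},
    "9.3":   {"completed": True, "approved": False},
}

def intended_use_fulfilled(status):
    return all(s["completed"] and s["approved"] for s in status.values())

print(intended_use_fulfilled(status))  # False until 9.3 is also approved
```
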
Third-party and Customer Relationships

To ensure that the organization understands its responsibilities, remains accountable, and apportions risks appropriately when third parties are involved at any stage of the AI system life cycle.

A.10.2 Allocating Responsibilities

The organization must allocate and document responsibilities within the AI system life cycle, ensuring clarity between the organization, its partners, suppliers, customers, and any involved third parties. This allocation of responsibilities helps maintain accountability and ensures that all parties are aware of their roles in relation to the AI system's development, deployment, and operation.

Allocate Responsibilities

This requirement mandates that the organization clearly defines and allocates roles and responsibilities among all parties involved in the AI system's life cycle. Each party’s role should be documented to avoid confusion and ensure compliance with organizational and regulatory standards.

Allocate Responsibilities Details

To fulfill this requirement, users need to provide the following details for each interested party involved in the AI system’s life cycle:

  1. Name of the Interested Party: Identify the party involved in the AI system, such as “Data Provider,” “System Developer,” or “Customer.”

  2. Responsibilities: Describe the specific responsibilities of the interested party.

    • Example: For a “Data Provider,” responsibilities may include ensuring data accuracy, quality, and compliance with data protection standards.
  3. Roles: Define the role of the interested party. Roles could include data providers, developers, end-users, or oversight personnel.

    • Example: A “System Developer” might be responsible for the design and implementation of the AI model.
  4. Lifecycle Stage: Specify which stage of the AI system life cycle this party is involved in.

    • Example: The “System Developer” may be involved in the “Build and Use Model” stage.

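For illustration, one allocation entry could be captured as a small record like the following; the field names mirror the details above but are otherwise hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResponsibilityAllocation:
    """One interested party's role in one stage of the AI system life cycle."""
    party: str             # e.g. "System Developer"
    role: str              # e.g. "Developer"
    responsibilities: str  # free-text description of the party's duties
    lifecycle_stage: str   # e.g. "Build and Use Model"

dev = ResponsibilityAllocation(
    party="System Developer",
    role="Developer",
    responsibilities="Design and implementation of the AI model.",
    lifecycle_stage="Build and Use Model",
)
print(dev.party, "->", dev.lifecycle_stage)
```
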
How to Fulfill in Validaitor Platform

  • Users can click the + button to add or select interested parties for each AI system and provide the necessary information.
  • To select an existing interested party, click the Select Int. Party button and choose the relevant AI system lifecycle stage from the dropdown menu.
  • Alternatively, users can add a new interested party by clicking the Add Int. Party button. For new entries:
    • Type the name and specify the responsibilities of the interested party in the provided text fields.
    • Select the appropriate role and lifecycle stage from the dropdown menus to allocate responsibilities accurately.

Processed Data with PII

This requirement focuses on identifying and managing personally identifiable information (PII) within the AI system. The organization should ensure that PII is processed according to the organization’s role as a PII processor, a PII controller, or both, in compliance with relevant data protection standards.

Processed Data with PII Details

To fulfill this requirement, users should:

  • Verify if the AI system processes data containing PII by reviewing the data sources and system functionality.
  • Determine the organization’s role in handling PII. If the organization collects, stores, or manages PII data, it may act as a PII controller, a PII processor, or both depending on the level of control and responsibility it has over the data.
    • Example: If the organization only processes data provided by another entity, it may be considered a PII processor. However, if it also decides the purpose and means of processing PII data, it would be a PII controller.

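The controller/processor distinction above can be summarized as a small decision helper, sketched below for illustration only; an actual determination requires legal review under the applicable data protection law.

```python
# Illustrative PII-role helper following the distinction described above.
def pii_role(decides_purpose_and_means, processes_for_another_entity):
    if decides_purpose_and_means and processes_for_another_entity:
        return "Both"
    if decides_purpose_and_means:
        return "PII Controller"
    if processes_for_another_entity:
        return "PII Processor"
    return "No PII role"

# An organization that only processes data provided by another entity:
print(pii_role(False, True))  # PII Processor
```
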
How to Fulfill in Validaitor Platform

  • Users can select “Yes” or “No” to indicate whether the AI system processes data containing PII.
  • If “Yes” is selected, they can choose the organization’s role (PII Processor, PII Controller, or Both) from a dropdown menu to complete this requirement.

A.10.3 Suppliers

The organization is responsible for establishing a process to ensure that any services, products, or materials provided by suppliers align with its commitment to the responsible development and use of AI systems.

This involves evaluating suppliers based on their alignment with the organization's responsible AI principles, ensuring adequate documentation, and identifying and mitigating any risks posed by the supplier’s contributions to the AI system.

Establish Supplier Alignment Process

Establishing a supplier alignment process includes documenting the details of each supplier, evaluating the associated risks, setting requirements, and planning for ongoing monitoring and evaluation.

Establish Supplier Alignment Process Details

  1. Supplier Information:

    • Name: Enter the supplier’s name.
    • Description: Provide a description of the supplier’s role or expertise.
    • Supplied Material: Specify the services, products, datasets, or materials provided by the supplier.

    Example: A supplier named "DataPlus" provides a curated dataset for training a language model. Description: "Supplier specializes in high-quality data curation for natural language processing applications." Supplied Material: "Curated text data for training language models in multiple languages."

  2. Potential Risks:

    • Identify any risks associated with using the supplier's products or services, such as data quality issues, potential biases in algorithms, or insufficient security measures.
    • Example: "Risk of data bias in demographic representation, which could lead to unfair outcomes in language model predictions."
  3. Supplier Requirements:

    • Define the requirements that the supplier must meet to ensure responsible AI use, such as compliance with data protection standards, adherence to ethical guidelines, or transparency in model development processes.
    • Example: "Supplier must provide transparency on data sourcing and ensure demographic diversity in dataset composition to mitigate biases."
  4. Monitoring and Evaluation:

    • Outline the ongoing processes for monitoring the supplier’s compliance and performance, including regular evaluations, audits, or quality checks.
    • Example: "Quarterly review of supplier’s dataset updates to ensure data quality and alignment with organizational standards."

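Putting the four detail areas together, a supplier entry might look like the record below, reusing the DataPlus example. The structure is a hypothetical sketch, not the platform’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class SupplierRecord:
    """Supplier details plus risks, requirements, and the monitoring plan."""
    name: str
    description: str
    supplied_material: str
    potential_risks: list = field(default_factory=list)
    requirements: list = field(default_factory=list)
    monitoring_plan: str = ""

dataplus = SupplierRecord(
    name="DataPlus",
    description="High-quality data curation for NLP applications.",
    supplied_material="Curated multilingual text for language model training.",
    potential_risks=["Demographic bias in dataset composition"],
    requirements=["Transparency on data sourcing",
                  "Demographic diversity in datasets"],
    monitoring_plan="Quarterly review of dataset updates.",
)
```
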
How to Fulfill in Validaitor Platform

  • Add or Select a Supplier: Users can either add a new supplier or select an existing one.
    • To add a new supplier, enter the name, description, and supplied materials in the provided textboxes and click the Add Supplier button.
  • Fill in Additional Details: After adding or selecting a supplier, users should:
    • Document potential risks in the Potential Risks textbox.
    • Specify supplier requirements in the Supplier Requirements textbox.
    • Outline the monitoring and evaluation plan in the Monitoring and Evaluation textbox.
  • Update Alignment: Click Update Align Supplier to confirm alignment with the selected AI system.
  • Remove Connection: To disconnect a supplier from an AI system, click the Remove Connection button.
  • Assign and Align for Each AI System: Each AI system must have its suppliers assigned and their responsibilities aligned, ensuring that every supplier’s contributions adhere to the organization’s standards for responsible AI.

Upload Supplier Documentation

The organization should ensure that the supplier provides comprehensive documentation related to the AI system or any components they supply. This documentation is essential to maintain transparency, accountability, and alignment with responsible AI practices.

How to Fulfill in Validaitor Platform

  • Upload Document: Create a detailed document that includes information from controls 6.2.7 (AI System Technical Documentation) and 8.2 (System Documentation and Information for Users). Ensure the document is clear, comprehensive, and accessible. Upload this document to the Validaitor platform to fulfill the requirement.

A.10.4 Customers

The organization shall ensure that its responsible approach to the development and use of AI systems takes into account customer expectations and needs.

Consider Customer Expectations

This requirement mandates that the organization’s responsible AI practices align with the expectations and requirements of its customers. By thoroughly understanding and addressing these expectations, the organization can create AI systems that are not only effective but also trustworthy and aligned with customer values and needs.

Required Contents of the Customer Expectations Document

  1. Customer Needs and Expectations: A detailed analysis of the specific expectations, needs, and values of the organization’s customers with respect to the AI system. This may include expectations around transparency, fairness, privacy, and performance.

    • Example: If customers expect high transparency, the document should outline how transparency will be maintained throughout the AI lifecycle and how customers will be informed.
  2. Risk Identification and Communication: A section identifying potential risks associated with customer use of the AI system. This should include risks the organization anticipates the customer may encounter, and a plan for informing the customer about these risks.

    • Example: For an AI system used in sensitive industries, such as healthcare, the document could describe known risks of false positives and the protocols for communicating these risks to customers.
  3. Roles and Responsibilities: A clear delineation of roles and responsibilities between the organization and the customer in managing and mitigating risks. This should cover what the organization will handle versus what the customer is responsible for.

    • Example: If the organization provides an AI diagnostic tool, the document could clarify that while the organization is responsible for maintaining model accuracy, the customer (e.g., healthcare provider) is responsible for overseeing its appropriate use.
  4. Compliance and Contractual Requirements: Information on any contractual or regulatory obligations related to the AI system that align with customer requirements. This includes data privacy, security, and other compliance aspects that may be expected by the customer.

    • Example: A section that outlines GDPR compliance for EU customers and how the AI system adheres to these regulations.
  5. Usage Guidelines and Limitations: Clear guidance for customers on the intended use of the AI system, including any limitations or conditions under which the system should be used. This helps ensure the customer understands appropriate and inappropriate uses of the system.

    • Example: For a machine learning tool used for predictive analytics, include usage guidelines that highlight acceptable data types and scenarios where predictions may be less reliable.
  6. Feedback and Support Mechanisms: A process for customers to provide feedback, report issues, and receive support related to the AI system. This section should address how the organization will engage with customers to continuously improve the AI system and address any issues.

    • Example: Instructions for a dedicated support channel where customers can report system inaccuracies or request explanations.

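Before uploading, a draft can be checked against the six required sections above. The sketch below assumes the draft is tracked as a simple mapping from section name to content; the names paraphrase the list and are not prescribed by the platform.

```python
# Illustrative completeness check for the customer expectations document.
REQUIRED = [
    "customer_needs_and_expectations",
    "risk_identification_and_communication",
    "roles_and_responsibilities",
    "compliance_and_contractual_requirements",
    "usage_guidelines_and_limitations",
    "feedback_and_support_mechanisms",
]

def is_complete(sections):
    """True only when every required section is present and non-empty."""
    return all(sections.get(name, "").strip() for name in REQUIRED)

draft = {name: "..." for name in REQUIRED[:-1]}  # support section still missing
print(is_complete(draft))  # False
```
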
How to Fulfill in Validaitor Platform

  • Upload Document: Prepare a document covering all of the points above and upload it to the Validaitor platform to meet the requirement.