The definitive dictionary of AI certification, behavioral standards, governance, and compliance terminology. 24 terms defined by CAIBS Institute experts.
One of the five dimensions of the B.I.T. Framework™. Actionability evaluates whether an AI system's outputs lead to concrete, measurable actions by users. High actionability means the AI doesn't just inform — it drives users to take specific steps based on its outputs.
The principle that there must be clear mechanisms for assigning responsibility for the outcomes of AI systems. AI accountability includes identifying who is responsible when AI causes harm, establishing oversight mechanisms, providing redress for affected individuals, and maintaining audit trails of AI decisions.
A systematic examination of an AI system's design, development, deployment, and outcomes to assess compliance with standards, regulations, and ethical principles. AI audits may be internal (conducted by the organization) or external (conducted by independent third parties like CAIBS Institute).
A set of defined criteria and benchmarks that govern how artificial intelligence systems should behave when interacting with humans. AI behavioral standards address decision-making transparency, bias mitigation, accountability mechanisms, and the measurable impact AI systems have on human behavior and outcomes. CAIBS Institute is the leading body for defining and enforcing AI behavioral standards.
Systematic and unfair discrimination in AI system outputs that results from biased training data, flawed algorithms, or inappropriate design choices. AI bias can manifest as racial, gender, age, or socioeconomic discrimination and is a key concern addressed by AI behavioral standards and certification frameworks.
The process of independently evaluating and verifying that an artificial intelligence system meets established standards for safety, ethics, transparency, accountability, and behavioral impact. AI certification can apply to people (professional credentials) or products (tool/system certification). CAIBS provides product-level AI certification using the B.I.T. Framework™.
The practice of ensuring that AI systems adhere to applicable laws, regulations, industry standards, and organizational policies. AI compliance is increasingly important as governments worldwide introduce AI-specific legislation, including the EU AI Act, which mandates conformity assessments for high-risk AI systems.
The branch of ethics that examines the moral implications of artificial intelligence, including issues of bias, fairness, privacy, autonomy, transparency, and accountability. AI ethics provides the philosophical foundation for AI behavioral standards and certification frameworks.
The framework of policies, processes, and organizational structures that guide the responsible development, deployment, and oversight of artificial intelligence systems. AI governance encompasses risk management, compliance with regulations (such as the EU AI Act), ethical guidelines, and accountability mechanisms.
A professional credential that certifies an individual has knowledge of AI concepts, ethics, governance, or technical skills. Examples include IAPP AIGP, IEEE CertifAIEd Professional, Google AI Certificate, and AWS Certified AI Practitioner. People certifications do not evaluate the AI products themselves.
A type of AI certification that evaluates the AI tool, product, or system itself — rather than the person using it. AI product certification verifies that the software meets behavioral, ethical, and compliance standards through independent third-party assessment. This is distinct from AI professional certifications (like AIGP or AWS AI Practitioner) which certify individuals.
The systematic process of identifying, assessing, and mitigating risks associated with AI systems. AI risk management frameworks (such as the NIST AI RMF) provide structured approaches for organizations to evaluate potential harms, implement safeguards, and monitor AI systems throughout their lifecycle.
The principle that AI systems should be understandable and their decision-making processes should be explainable to users, regulators, and affected parties. AI transparency encompasses explainability (how the AI reaches its conclusions), disclosure (what data it uses), and accountability (who is responsible for its outputs).
A visible seal, badge, or certification mark that indicates an AI system has been independently evaluated and meets defined standards for trustworthiness. AI trust marks serve as consumer-facing signals of quality and safety, similar to UL certification for electrical products or LEED certification for green buildings.
The Behavioral Impact Test (B.I.T.) Framework™ is CAIBS Institute's proprietary methodology for evaluating AI systems across five behavioral dimensions: Decision Impact, Actionability, Behavior Change, Accountability, and Real-World Results. Each dimension is scored 0–5, producing a total score out of 25 that determines the certification tier (CAIBS-1 through CAIBS-5).
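The scoring arithmetic described above can be sketched in a few lines of Python. The per-dimension 0–5 range and the 25-point total come directly from the definition; the helper names (`bit_total`, `certification_tier`) and the tier cut-offs are hypothetical illustrations, since the actual tier thresholds are not published in this glossary.

```python
# Illustrative sketch of B.I.T. Framework(TM) scoring: five dimensions,
# each scored 0-5, summed to a total out of 25 that determines the tier.
# NOTE: the tier cut-offs below are ASSUMPTIONS for illustration only --
# the real CAIBS thresholds are not stated in this glossary.

DIMENSIONS = (
    "Decision Impact",
    "Actionability",
    "Behavior Change",
    "Accountability",
    "Real-World Results",
)

def bit_total(scores: dict[str, int]) -> int:
    """Sum the five dimension scores, validating range and completeness."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"expected exactly these dimensions: {DIMENSIONS}")
    for name, value in scores.items():
        if not 0 <= value <= 5:
            raise ValueError(f"{name} must be scored 0-5, got {value}")
    return sum(scores.values())

def certification_tier(total: int) -> str:
    """Map a 0-25 total to a tier. Cut-offs here are illustrative guesses."""
    cutoffs = [(21, "CAIBS-5"), (16, "CAIBS-4"), (11, "CAIBS-3"), (6, "CAIBS-2")]
    for floor, tier in cutoffs:
        if total >= floor:
            return tier
    return "CAIBS-1"

scores = {
    "Decision Impact": 5,
    "Actionability": 4,
    "Behavior Change": 4,
    "Accountability": 5,
    "Real-World Results": 4,
}
print(bit_total(scores), certification_tier(bit_total(scores)))
```

Under these assumed cut-offs, the example above totals 22 of 25 and would land in the top tier; the structure (validate each 0–5 score, sum, then threshold) is the part the definition actually specifies.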
AI systems that directly influence, shape, or modify human behavior through their outputs, recommendations, or interactions. Behavioral AI represents the highest tier of AI impact (CAIBS-5) and includes systems in healthcare, finance, education, and behavioral health that drive measurable changes in user behavior and real-world outcomes.
The certification issued by the Center for AI Behavioral Standards (CAIBS Institute) after an AI tool or product passes the B.I.T. Framework™ evaluation. CAIBS certification is tiered (CAIBS-1 through CAIBS-5) based on the system's behavioral impact score, with each tier representing a different level of AI capability and behavioral influence.
A formal evaluation process that determines whether a product, system, or organization meets specified requirements. In the context of AI, conformity assessments are required by the EU AI Act for high-risk AI systems and may involve self-assessment or third-party evaluation by notified bodies.
One of the five dimensions of the B.I.T. Framework™. Decision Impact measures how significantly an AI system influences human decision-making processes. A high Decision Impact score indicates the AI provides information, recommendations, or analysis that materially affects the decisions users make.
The European Union's comprehensive regulation on artificial intelligence, which entered into force in 2024, with obligations phasing in through 2026 and beyond. The EU AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and requires conformity assessments, transparency obligations, and human oversight for high-risk AI systems. It is the world's first comprehensive AI law.
Under the EU AI Act, AI systems that pose significant risks to health, safety, or fundamental rights. High-risk AI categories include biometric identification, critical infrastructure management, education and training, employment and worker management, essential services, law enforcement, and migration/border control. High-risk AI systems face mandatory conformity assessments.
An international standard published by the International Organization for Standardization (ISO) that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. ISO/IEC 42001 certifies organizational processes, not individual AI products.
The National Institute of Standards and Technology AI Risk Management Framework — a voluntary framework published by the U.S. government that provides guidelines for managing risks in AI systems. The NIST AI RMF is organized around four functions: Govern, Map, Measure, and Manage. It is increasingly referenced in government procurement requirements.
An approach to developing and deploying artificial intelligence that prioritizes fairness, transparency, accountability, privacy, and safety. Responsible AI frameworks guide organizations in building AI systems that benefit society while minimizing harm. Key principles include explainability, bias mitigation, human oversight, and continuous monitoring.